WorldWideScience

Sample records for blind deconvolution techniques

  1. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
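
    The frame-level part of such a parallelization is straightforward to sketch. The following is a hypothetical Python illustration (not the authors' code, which also parallelizes within frames): each frame receives an independent, here non-blind, Richardson-Lucy-style deblurring pass, and frames are distributed across worker processes; deblur_frame, the data shapes, and the iteration count are all assumptions.

        import multiprocessing as mp

        import numpy as np
        from scipy.signal import fftconvolve

        def deblur_frame(args):
            # Illustrative per-frame work unit: a few Richardson-Lucy updates.
            frame, psf, n_iter = args
            est = np.full_like(frame, frame.mean())
            for _ in range(n_iter):
                conv = fftconvolve(est, psf, mode="same")
                est = est * fftconvolve(frame / (conv + 1e-12), psf[::-1, ::-1], mode="same")
            return est

        if __name__ == "__main__":
            frames = [np.random.rand(64, 64) for _ in range(8)]  # stand-in data
            psf = np.ones((5, 5)) / 25.0                         # stand-in blur
            with mp.Pool() as pool:                              # one worker per core
                deblurred = pool.map(deblur_frame, [(f, psf, 20) for f in frames])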

  2. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp

    2017-09-04

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular, this univariate case suffers from several non-trivial ambiguities, and blind deconvolution is therefore known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to a global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, theoretically, recovery is computationally tractable. However, practical applications require efficient algorithms which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semi-definite program in the noisy case. Our work is motivated by applications in blind communication scenarios, and we discuss a specific signaling scheme where information is encoded into polynomial roots.
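
    A minimal real-valued sketch of the gradient-descent stage may clarify the idea; for complex signals the gradients below become Wirtinger derivatives, and the random initialization, step size, and iteration count are all illustrative (in practice a careful initialization is needed on this non-convex problem).

        import numpy as np

        def blind_deconv_gd(y, m, n, steps=5000, lr=1e-3, seed=0):
            """Jointly minimize 0.5*||w * x - y||^2 over w (len m) and x (len n)."""
            rng = np.random.default_rng(seed)
            w, x = rng.standard_normal(m), rng.standard_normal(n)
            for _ in range(steps):
                r = np.convolve(w, x) - y               # residual, length m+n-1
                gw = np.correlate(r, x, mode="valid")   # gradient w.r.t. w
                gx = np.correlate(r, w, mode="valid")   # gradient w.r.t. x
                w, x = w - lr * gw, x - lr * gx
            return w, x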

  3. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular, this univariate case suffers from several non-trivial ambiguities, and blind deconvolution is therefore known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to a global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, theoretically, recovery is computationally tractable. However, practical applications require efficient algorithms which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semi-definite program in the noisy case. Our work is motivated by applications in blind communication scenarios, and we discuss a specific signaling scheme where information is encoded into polynomial roots.

  4. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolved output), where the output and input probability density functions (pdfs) of the deconvolution process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to achieve better equalization performance than the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNRs down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and the mean square error (MSE).

  5. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that samples the nuclear radiation signal, and a USB controller operating in Slave FIFO mode provides high-speed transmission; LabVIEW performs online data processing and display. A blind deconvolution method is used to remove pile-up from the acquired signal and to restore the nuclear pulse signal. Real-time measurements demonstrate the advantages of the approach. (authors)

  6. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and super-resolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and utility of the method.
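
    In generic form (our notation, not necessarily the paper's exact formulation), such methods minimize a regularized energy over the high-resolution image u and the blurs h_k:

        E(u, h_1, \dots, h_K) = \sum_{k=1}^{K} \lVert D(h_k * u) - g_k \rVert^2 + \lambda_u\, Q(u) + \lambda_h\, R(h_1, \dots, h_K)

    where g_k are the observed low-resolution frames, D is decimation by the resolution factor, Q is the image regularizer, and R encodes the multichannel blind-deconvolution constraint on the blurs.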

  7. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  8. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

    In this paper, we present a method for the measurement and analysis of nuclear pulse signals, with which pile-up is removed, the signal baseline is restored, and the original signal is recovered. The data acquisition system comprises an FPGA, an ADC and a USB interface. The FPGA controls the high-speed ADC that samples the nuclear radiation signal, and the USB controller works in Slave FIFO mode to provide high-speed transmission. Using LabVIEW, the system performs online blind deconvolution and data display. The simulation and experimental results demonstrate the advantages of the method. (authors)

  9. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability has been explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  10. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of a meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  11. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Roč. 26, č. 5 (2017), s. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  12. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, in contrast to the traditional image restoration method. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, with much better SNR and image resolution. We also report the time consumption of these methods, which shows no significant increase on a GPU platform.

  13. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to undesired trivial solutions. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To address combined failure extraction for gear boxes in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution was proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract the modulation features embedded in the observed signals, then the CFPA algorithm is employed to perform complex-domain blind separation, and finally the J-divergence of the spectrum is employed as a distance measure to resolve the permutation. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gear box combined failure detection in practice.

  15. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)

  16. A HOS-based blind deconvolution algorithm for the improvement of time resolution of mixed phase low SNR seismic data

    International Nuclear Information System (INIS)

    Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M

    2009-01-01

    A blind deconvolution technique using a modified higher order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low-SNR, short-length seismograms. In our study, the seismogram is assumed to be the output of a mixed-phase source wavelet (system) driven by a non-Gaussian input signal (due to the earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum-phase seismic signals because they rely only on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable for processing a non-minimum (mixed) phase system; however, most of them are unable to converge and show poor performance whenever noise dominates the actual signal, especially in cases where the observed data are limited (few samples). The developed blind equalization technique is primarily based on the EVA for blind equalization, initially to deal with mixed-phase non-Gaussian seismic signals. In order to deal with the dominant noise issue and the small number of available samples, certain modifications are incorporated into the EVA. For determining the deconvolution filter, one modification is to use more than one higher-order cumulant slice in the EVA. This overcomes the possibility of non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by increasing the power of the eigenvalues of the cumulant slice related to the actual signal and rejecting the eigenvalues below a threshold representing the noise. This modification reduces the effect of the small number of available samples and strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in mean square error (MSE) and a six

  17. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system have to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
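
    For reference, the data-fidelity term in this Poisson setting is the generalized Kullback-Leibler divergence between the data g and the blurred object (notation ours; h is the PSF, x the object, b a known background):

        KL(x, h) = \sum_i \Big( g_i \ln \frac{g_i}{(h * x + b)_i} + (h * x + b)_i - g_i \Big)

    which the inexact alternating strategy minimizes over x and h in turn, each within its convex feasible set.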

  18. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

    Roč. 16, č. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  19. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

    Roč. 21, č. 4 (2012), s. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  20. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise are widespread when spectra are captured with an infrared spectrometer, especially as the aging of instruments has become a serious problem. In this paper, a blind spectral deconvolution method is proposed that introduces the similarity of multiple scales. Since latent spectra at different scales resemble one another, this similarity is used as prior knowledge to constrain the estimated latent spectrum to be similar to its previous-scale version, reducing the artifacts produced by deconvolution. The experimental results indicate that the proposed method achieves better performance than state-of-the-art methods and obtains satisfying deconvolution results with fewer artifacts. The recovered infrared spectra make it easy to extract the spectral features and recognize unknown objects. (paper)

  1. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape, and we then use this shape as an estimate of the earthquake source; (2) we compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis - in this case, impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
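
    Of the compared techniques, water-level deconvolution has the simplest closed form; a minimal frequency-domain sketch (our illustration, with the water-level fraction as a free parameter):

        import numpy as np

        def water_level_deconv(u, s, level=0.01):
            """Deconvolve source s from seismogram u: U(w)S*(w)/max(|S|^2, floor)."""
            n = len(u) + len(s) - 1
            U, S = np.fft.rfft(u, n), np.fft.rfft(s, n)
            power = np.abs(S) ** 2
            floor = level * power.max()              # the "water level"
            return np.fft.irfft(U * np.conj(S) / np.maximum(power, floor), n)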

  2. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well known ill-posedness, both on recovering the blurring operator and the true image, makes the problem really difficult to handle. We show that, by imposing appropriate constraints on the variables and with well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  3. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  4. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step: it provides the reflectivity series by compressing the signal, a compression obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution applies the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two algorithm (SOOT) in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it should be implemented on post-stack or pre-stack seismic data from regions of complex structure.
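
    For reference, the smoothed l1/l2 penalty at the heart of SOOT can be written as follows (our paraphrase of the SOOT formulation of Repetti et al.; alpha, beta, eta are small smoothing constants):

        P(x) = \log \frac{\ell_{1,\alpha}(x) + \beta}{\ell_{2,\eta}(x)}, \qquad
        \ell_{1,\alpha}(x) = \sum_i \left( \sqrt{x_i^2 + \alpha^2} - \alpha \right), \qquad
        \ell_{2,\eta}(x) = \sqrt{\lVert x \rVert_2^2 + \eta^2}

    which makes the otherwise nonsmooth, scale-invariant l1/l2 sparsity measure differentiable and hence usable in gradient-based solvers.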

  5. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...

  6. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to lie below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  7. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  8. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  9. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a quality similar to that of parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  10. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2006-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  11. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the z-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  12. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  13. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
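
    As an illustration of that last step, the Hodrick-Prescott filter splits a TAC time series into a smooth trend and a cycle; excursions of the trend above a threshold can then flag distinct episodes. A minimal sketch with statsmodels, where the smoothing parameter and threshold are assumptions:

        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        def drinking_episodes(tac, lamb=1600.0, thresh=0.1):
            """Flag samples whose smoothed TAC trend exceeds a relative threshold."""
            cycle, trend = hpfilter(np.asarray(tac), lamb=lamb)
            return trend > thresh * trend.max()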

  14. Application of blind deconvolution with crest factor for recovery of original rolling element bearing defect signals

    International Nuclear Information System (INIS)

    Son, J. D.; Yang, B. S.; Tan, A. C. C.; Mathew, J.

    2004-01-01

    Many machine failures are not detected well in advance due to masking by background noise and attenuation of the source signal through the transmission mediums. Advanced signal processing techniques using adaptive filters and higher order statistics have been attempted to extract the source signal from data measured at the machine surface. In this paper, blind deconvolution using the Eigenvector Algorithm (EVA) technique is used to recover a damaged bearing signal using only the signal measured at the machine surface. A damaged bearing signal corrupted by noise with varying signal-to-noise (s/n) ratio was used to determine the effectiveness of the technique in detecting an incipient signal and the optimum choice of filter length. The results show that the technique is effective in detecting the source signal at an s/n ratio as low as 0.21, but requires a relatively large filter length.

  15. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
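
    For reference, one +SOR sweep on the regularized normal equations (A^T A + lambda I) x = A^T b looks as follows; this is generic textbook SOR with a positivity clamp, not the authors' adaptive-relaxation code, and the relaxation parameter omega is held fixed here rather than updated between iterations as the paper proposes.

        import numpy as np

        def plus_sor_sweep(M, rhs, x, omega):
            """One Gauss-Seidel/SOR sweep for M x = rhs with positivity (+SOR)."""
            for i in range(len(x)):
                sigma = M[i] @ x - M[i, i] * x[i]   # off-diagonal contribution
                x[i] = (1 - omega) * x[i] + omega * (rhs[i] - sigma) / M[i, i]
                x[i] = max(x[i], 0.0)               # positivity constraint
            return x

    Given a blur matrix A and data b, one would set M = A.T @ A + lam * np.eye(A.shape[1]) and rhs = A.T @ b, then repeat sweeps while adapting omega between them.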

  16. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  17. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  18. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. We consider that information in the gradient domain is better suited to blur-kernel estimation, so the blur kernel is estimated in the gradient domain. The problem can be solved quickly in the frequency domain via the Fast Fourier Transform. In addition, to improve the effectiveness of the algorithm, we add a multi-scale iterative optimization scheme. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, preserving the edges and details of the image while ensuring the accuracy of the results.
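
    The shrinkage-thresholding step mentioned above has a standard generic template; the sketch below shows plain ISTA for an l1 penalty (the paper's actual penalty is the L1/L2 ratio, so this is only the underlying iteration, with all parameters illustrative):

        import numpy as np

        def soft(v, t):
            """Soft-thresholding, the proximal operator of t*||.||_1."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, b, lam, step, n_iter=200):
            """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative shrinkage."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)                # gradient of smooth term
                x = soft(x - step * grad, step * lam)   # shrinkage/threshold step
            return x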

  19. Blind deconvolution of seismograms regularized via minimum support

    International Nuclear Information System (INIS)

    Royer, A A; Bostock, M G; Haber, E

    2012-01-01

    The separation of earthquake source signature and propagation effects (the Earth’s ‘Green’s function’) that encode a seismogram is a challenging problem in seismology. The task of separating these two effects is called blind deconvolution. By considering seismograms of multiple earthquakes from similar locations recorded at a given station and that therefore share the same Green’s function, we may write a linear relation in the time domain u_i(t) * s_j(t) − u_j(t) * s_i(t) = 0, where u_i(t) is the seismogram for the i-th source and s_j(t) is the j-th unknown source. The symbol * represents the convolution operator. From two or more seismograms, we obtain a homogeneous linear system where the unknowns are the sources. This system is subject to a scaling constraint to deliver a non-trivial solution. Since source durations are not known a priori and must be determined, we augment our system by introducing the source durations as unknowns and we solve the combined system (sources and source durations) using separation of variables. Our solution is derived using direct linear inversion to recover the sources and Newton’s method to recover source durations. This method is tested using two sets of synthetic seismograms created by convolution of (i) random Gaussian source-time functions and (ii) band-limited sources with a simplified Green’s function and signal to noise levels up to 10% with encouraging results. (paper)
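
    In matrix form the identity above becomes a homogeneous linear system. A small sketch of how it can be assembled and solved for two seismograms (our illustration, using an SVD null vector and omitting the paper's treatment of unknown source durations):

        import numpy as np

        def conv_matrix(u, m):
            """Toeplitz matrix C such that C @ s equals u convolved with s (len m)."""
            C = np.zeros((len(u) + m - 1, m))
            for j in range(m):
                C[j:j + len(u), j] = u
            return C

        def estimate_sources(u1, u2, m):
            """Solve u1*s2 - u2*s1 = 0 for unit-norm sources of length m."""
            A = np.hstack([conv_matrix(u1, m), -conv_matrix(u2, m)])
            _, _, Vt = np.linalg.svd(A)
            v = Vt[-1]                  # null-space direction (smallest singular value)
            return v[m:], v[:m]         # (s1, s2), defined up to a common scale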

  20. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data have been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.

  1. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function F(t) = (1 + e^(−αβ))/(1 + e^((t−α)β)) was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the Fermi function referred to. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
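
    A compact sketch of the fitting step (our reconstruction from the description above; variable names, time units, and initial values are assumptions):

        import numpy as np
        from scipy.optimize import least_squares

        def fermi(t, a, b):
            """Impulse response (1 + exp(-a*b)) / (1 + exp((t - a)*b))."""
            return (1.0 + np.exp(-a * b)) / (1.0 + np.exp((t - a) * b))

        def fit_fermi(t, gastric_input, intestinal_activity):
            """Find (alpha, beta) so the input convolved with Fermi fits the data."""
            dt = t[1] - t[0]
            def residual(p):
                model = np.convolve(gastric_input, fermi(t, *p))[:len(t)] * dt
                return model - intestinal_activity
            return least_squares(residual, x0=[2.0, 2.0]).x  # hours, illustrative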

  2. Blind phase retrieval for aberrated linear shift-invariant imaging systems

    International Nuclear Information System (INIS)

    Yu, Rotha P; Paganin, David M

    2010-01-01

    We develop a means to reconstruct an input complex coherent scalar wavefield, given a through focal series (TFS) of three intensity images output from a two-dimensional (2D) linear shift-invariant optical imaging system with unknown aberrations. This blind phase retrieval technique unites two methods, namely (i) TFS phase retrieval and (ii) iterative blind deconvolution. The efficacy of our blind phase retrieval procedure has been demonstrated using simulated data, for a variety of Poisson noise levels.

  3. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising area of endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least squares method. Trials with computer-simulated D-DBAR spectra are generated and deconvoluted in order to find the best form of regularizer and the regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution after background subtraction, using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)

  4. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are modularization of the structure for implementation feasibility, reduction of the computation and data dependency of the 2D FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
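
    The look-up-table trick is generic and easy to sketch (our illustration: 256 segments with linear interpolation between entries, for inputs normalized to [0, 1]):

        import numpy as np

        GRID = np.linspace(0.0, 1.0, 256)       # segment boundaries

        def pow_lut(x, p):
            """Approximate x**p on [0, 1] with a segmented look-up table."""
            table = GRID ** p                   # in practice cached per exponent
            return np.interp(x, GRID, table)    # piecewise-linear interpolation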

  5. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which severely limits the resolution of the received echograms. This resolution may be improved with deconvolution methods. But one-dimensional deconvolution methods run into problems in non destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature. But they often come from other application fields (biomedical engineering, geophysics) and we show they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats deconvolution as an estimation problem and is performed in two steps: (i) a phase correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is due only to the transducer and is assumed time-invariant during propagation; (ii) a band equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and actual results are given to prove that this is a good approach for resolution improvement in attenuating media.

  6. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data have been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.

  7. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s^−1 and +3.0 km s^−1 at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  8. Filtering and deconvolution for bioluminescence imaging of small animals

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

    This thesis is devoted to the analysis of bioluminescence images applied to the small animal, an imaging modality used in cancerology studies. Some problems, however, arise from the diffusion and absorption by the tissues of the light emitted by internal bioluminescent sources. In addition, system noise and cosmic-ray noise are present. These effects degrade the quality of the images and make them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise which corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing our results with the ground truth. Through various clinical tests, we have shown that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for the users of bioluminescence images. (author)

  9. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and the unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has recently been proved in a general setting. The method is iterative and each iteration, also called an outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. The method is therefore similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), the ratio between the peak intensity of the aberrated PSF and that of the ideal, aberration-free PSF. A typical application, though not the only one, is therefore the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe the algorithm in detail and recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
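
    The alternating structure described here can be mimicked with the classical blind Richardson–Lucy scheme (the paper replaces RL with SGP and adds the Strehl-ratio constraint, which this toy omits). A minimal one-dimensional sketch, with a synthetic star field and Gaussian PSF as assumptions:

      import numpy as np

      def cconv(a, b):   # circular convolution via FFT
          return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

      def ccorr(a, b):   # circular correlation via FFT
          return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

      rng = np.random.default_rng(1)
      n = 256
      obj = np.zeros(n); obj[[60, 90, 170]] = [200.0, 120.0, 80.0]  # sparse "cluster"
      i = np.arange(n)
      psf = np.exp(-0.5 * (np.minimum(i, n - i) / 3.0) ** 2); psf /= psf.sum()
      bg = 5.0
      data = rng.poisson(cconv(obj, psf) + bg).astype(float)  # Poisson data, as in the paper

      x = np.full(n, data.mean())          # object estimate
      h = np.full(n, 1.0 / n)              # PSF estimate
      for _ in range(300):                 # outer iterations
          ratio = data / (cconv(x, h) + bg)
          x *= ccorr(ratio, h)                     # multiplicative object update
          h *= ccorr(ratio, x) / x.sum()           # multiplicative PSF update
          h = np.maximum(h, 1e-12); h /= h.sum()   # non-negativity + normalization
      print("object peaks near:", np.sort(np.argsort(-x)[:3]))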

  10. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring

    Directory of Open Access Journals (Sweden)

    Naixue Xiong

    2017-01-01

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of these hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented, based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by numerical methods based on the alternating direction method of multipliers (ADMM). Comprehensive experiments on both synthetic and realistic datasets have been conducted to compare the proposed method with several state-of-the-art methods; the comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.

  11. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern with the targets' backscattering coefficients. Deconvolution algorithms are therefore proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially highly ill-posed: it is sensitive to noise and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed for improving the performance of deconvolution, with the aim of considerably alleviating the ill-posedness of single-channel deconvolution. To depict the performance improvement obtained by multiple channels more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern and the singular value distribution of the observation matrix, and these are used to compare different deconvolution systems. We present two multi-channel deconvolution algorithms which improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data are presented to verify the effectiveness of the proposed imaging methods.
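
    The benefit of multiple channels can be seen in the generic multi-channel least-squares inversion sketched below: frequencies suppressed by one antenna pattern are still constrained by another, so the joint problem is better conditioned than any single-channel division. The beam shapes, noise level and regularization constant are assumptions for illustration, not the paper's algorithms:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 400
      x = np.zeros(n); x[[120, 150, 260]] = [1.0, 0.8, -0.5]  # target scattering profile
      i = np.arange(n)

      def beam(width):   # assumed antenna pattern, circularly centered at zero
          b = np.exp(-0.5 * (np.minimum(i, n - i) / width) ** 2)
          return b / b.sum()

      H = [np.fft.fft(beam(w)) for w in (4.0, 7.0, 10.0)]   # three channels
      Y = [h * np.fft.fft(x) + np.fft.fft(0.01 * rng.standard_normal(n)) for h in H]

      # Joint regularized inversion over all channels at once.
      lam = 1e-3
      num = sum(np.conj(h) * y for h, y in zip(H, Y))
      den = sum(np.abs(h) ** 2 for h in H) + lam
      est = np.real(np.fft.ifft(num / den))
      print("recovered peaks near:", np.sort(np.argsort(-np.abs(est))[:3]))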

  12. Semi-blind sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  13. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

    This thesis is devoted to the analysis of bioluminescence images applied to the small animal, an imaging modality used in cancerology studies. Some problems, however, arise from the diffusion and absorption by the tissues of the light emitted by internal bioluminescent sources. In addition, system noise and cosmic-ray noise are present. These effects degrade the quality of the images and make them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise which corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing our results with the ground truth. Through various clinical tests, we have shown that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for the users of bioluminescence images. (author)

  14. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  15. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
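
    The idea of attaching Monte-Carlo error bars to an unfolded spectrum can be sketched as a parametric bootstrap: resample the counts, repeat the SVD-based unfolding, and read percentile bands off the replicas. The response matrix and spectrum below are synthetic assumptions (and square, for simplicity, rather than the underdetermined m < n case the record discusses):

      import numpy as np

      rng = np.random.default_rng(3)
      m = 12
      E = np.linspace(1.0, 10.0, m)
      centers = np.linspace(2.0, 9.0, m)
      R = np.exp(-0.5 * (E[None, :] - centers[:, None]) ** 2)   # channel response matrix
      true = 100.0 * np.exp(-0.5 * ((E - 5.0) / 1.5) ** 2)
      counts = rng.poisson(R @ true)

      def svd_unfold(y, rcond=1e-3):
          # Truncated-SVD pseudoinverse; non-negativity applied AFTER the SVD step,
          # as the record advocates.
          return np.clip(np.linalg.pinv(R, rcond=rcond) @ y, 0, None)

      # Monte Carlo: resample the counts, unfold each replica, take percentile bands.
      trials = np.array([svd_unfold(rng.poisson(counts)) for _ in range(500)])
      lo, hi = np.percentile(trials, [16, 84], axis=0)
      print("1-sigma band widths per energy point:", np.round(hi - lo, 1))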

  16. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and allocate special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques, based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution

  17. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  18. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  19. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  20. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  1. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  2. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and the seismic wavelet. One of the most important stages in seismic data processing is deconvolution, which applies inverse filters based on Wiener filter theory. That theory is limited by certain modelling assumptions which may not always be valid. The discrete form of the Kalman filter is therefore used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its ability to handle continually time-varying models and its high resolution capabilities. In this work, we use a discrete Kalman filter combined with primitive deconvolution. The filtering process works on the reflectivity function; hence the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with an energy waveform referred to as the seismic wavelet. A higher wavelet frequency gives a smaller wavelength; graphs of these results are presented.
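
    The two-stage flow described above (crude inverse-wavelet deconvolution, then Kalman filtering) can be caricatured as follows. The Kalman recursion shown uses a simple random-walk state model with assumed noise variances, so it merely smooths the primitively deconvolved trace; it is not the authors' reflectivity estimator:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 500
      refl = np.zeros(n); refl[[80, 200, 340]] = [1.0, -0.7, 0.5]
      k = np.arange(25)
      wavelet = np.exp(-0.5 * ((k - 12) / 3.0) ** 2) * np.cos(0.9 * (k - 12))
      trace = np.convolve(refl, wavelet, mode="same") + 0.05 * rng.standard_normal(n)

      # Step 1: "primitive" deconvolution by damped division with the wavelet spectrum.
      W = np.fft.fft(wavelet, n)
      raw = np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(W) / (np.abs(W) ** 2 + 1e-2)))
      raw = np.roll(raw, 12)            # undo the wavelet's 12-sample delay

      # Step 2: discrete Kalman filter (predict/update) along the trace.
      Q, Rn = 0.05, 0.02                # assumed process/measurement noise variances
      xhat, P = 0.0, 1.0
      out = np.empty(n)
      for t in range(n):
          P += Q                        # predict: random-walk state model
          K = P / (P + Rn)              # Kalman gain
          xhat += K * (raw[t] - xhat)   # update with the new sample
          P *= 1.0 - K
          out[t] = xhat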

  3. Probabilistic blind deconvolution of non-stationary sources

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    We solve a class of blind signal separation problems using a constrained linear Gaussian model. The observed signal is modelled by a convolutive mixture of colored noise signals with additive white noise. We derive a time-domain EM algorithm `KaBSS' which estimates the source signals...

  4. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used

  5. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Science.gov (United States)

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
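
    The core identifiability fact exploited by the subspace-type algorithms, that two coprime FIR channels can be recovered from their outputs alone, rests on the cross-relation h2*y1 = h1*y2. A small noise-free one-dimensional sketch (filter lengths and taps are invented; the paper's 2-D, noisy algorithms are considerably more elaborate):

      import numpy as np
      from scipy.linalg import toeplitz

      rng = np.random.default_rng(5)
      L, n = 4, 200                        # FIR blur length, signal length
      x = rng.standard_normal(n)
      h1 = np.array([1.0, 0.5, -0.3, 0.1])
      h2 = np.array([0.8, -0.2, 0.4, 0.2])
      y1, y2 = np.convolve(x, h1), np.convolve(x, h2)   # the two observed channels

      def conv_matrix(y, L):
          # Matrix C with C @ h == np.convolve(y, h) for any length-L filter h.
          col = np.concatenate([y, np.zeros(L - 1)])
          row = np.concatenate([[y[0]], np.zeros(L - 1)])
          return toeplitz(col, row)

      # Cross-relation h2*y1 = h1*y2  =>  [C(y2) | -C(y1)] @ [h1; h2] = 0.
      A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
      v = np.linalg.svd(A)[2][-1]          # null-space vector: stacked filters, up to scale
      print(np.round(v[:L] / v[0], 3))     # proportional to h1
      print(np.round(v[L:] / v[L], 3))     # proportional to h2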

  6. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a lot of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  7. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

    A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of errors of ranging due to peak overlap or peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra, the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated
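
    A stripped-down version of the idea, recovering ideal "stick" intensities given a known peak-broadening shape, can be written as a non-negative least-squares fit; the record's method adds maximum-entropy regularization on top of this, and the positions, widths and abundances below are invented:

      import numpy as np
      from scipy.optimize import nnls

      mz = np.linspace(55.5, 58.5, 300)
      positions = np.array([56.0, 56.5, 57.0, 57.5, 58.0])   # candidate peak positions

      def peak(center, width=0.12):       # assumed system (broadening) function
          return np.exp(-0.5 * ((mz - center) / width) ** 2)

      A = np.stack([peak(c) for c in positions], axis=1)      # design matrix
      true = np.array([0.6, 0.0, 0.3, 0.1, 0.0])
      rng = np.random.default_rng(6)
      spectrum = A @ true + 0.01 * rng.standard_normal(mz.size)

      sticks, _ = nnls(A, spectrum)       # non-negative deconvolved intensities
      print(np.round(sticks, 3))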

  8. Multichannel blind iterative image restoration

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Flusser, Jan

    2003-01-01

    Roč. 12, č. 9 (2003), s. 1094-1106 ISSN 1057-7149 R&D Projects: GA ČR GA102/00/1711 Institutional research plan: CEZ:AV0Z1075907 Keywords : conjugate gradient * half-quadratic regularization * multichannel blind deconvolution Subject RIV: BD - Theory of Information Impact factor: 2.642, year: 2003 http://library.utia.cas.cz/prace/20030104.pdf

  9. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals

  10. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

    In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data which represent a convolution of the former function with an instrument dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error and down to ∼20 μeV with 10% relative error. (orig.)
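
    The linear stage of such a procedure, Tikhonov-regularized deconvolution with a smoothness penalty, fits in a few lines. The resolution width, model S_Q(E) and regularization weight below are stand-ins, and the paper's non-linear transformation is omitted:

      import numpy as np

      rng = np.random.default_rng(7)
      m = 200
      E = np.linspace(-1.0, 1.0, m)                 # energy-transfer axis (arbitrary units)
      res = np.exp(-0.5 * (E / 0.05) ** 2); res /= res.sum()   # resolution function

      # Circulant blur matrix: column k holds the resolution centred at sample k.
      A = np.array([np.roll(res, k - m // 2) for k in range(m)]).T
      S_true = np.exp(-np.abs(E) / 0.15)            # assumed scattering function S_Q(E)
      data = A @ S_true + 1e-3 * rng.standard_normal(m)

      # Minimize ||A s - d||^2 + lam ||D s||^2 with a second-difference penalty D.
      D = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
      lam = 1e-4
      S_est = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ data)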

  11. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as a part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
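
    In its simplest form, deconvolution interferometry divides the spectrum of one record by that of another, with a water level guarding the spectral notches. A toy sketch (the sensor records and damping constant are fabricated; real processing adds windowing and averaging over events):

      import numpy as np

      def deconv_interferometry(u_top, u_bottom, eps=1e-2):
          # Estimate the impulse response between two sensors by damped
          # spectral division of the two recorded wavefields.
          Ut, Ub = np.fft.fft(u_top), np.fft.fft(u_bottom)
          den = np.abs(Ub) ** 2
          den = np.maximum(den, eps * den.mean())   # water level stabilizes notches
          return np.real(np.fft.ifft(Ut * np.conj(Ub) / den))

      rng = np.random.default_rng(8)
      src = rng.standard_normal(1024)               # borehole record (assumed)
      u_surface = 0.6 * np.roll(src, 30) + 0.01 * rng.standard_normal(1024)
      ir = deconv_interferometry(u_surface, src)
      print("travel time between sensors (samples):", int(np.argmax(ir)))   # ~30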

  12. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 512 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)

  13. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

    In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin adapting and applying this method to the deconvolution of positron lifetime annihilation spectra

  14. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (∼500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (many bunch crossing periods) is sampled and digitised in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies performed are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, ADC and further digital processing implemented on a PC. (author)

  15. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov

  16. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

    The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software which utilizes a standard peak search algorithm. (author)

  17. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations, involving a stack of layers overlying a rock, is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions, given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities

  18. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI.

    Science.gov (United States)

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn

    2016-03-01

    One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.

  19. Fruit fly optimization based least square support vector regression for blind image restoration

    Science.gov (United States)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors; however, this is not practical for much real image processing, and recovery must then proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting issues. This paper therefore proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through the FOA, with the fitness function of the FOA calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and

  20. INTRAVAL project phase 2. Analysis of STRIPA 3D data by a deconvolution technique

    International Nuclear Information System (INIS)

    Ilvonen, M.; Hautojaervi, A.; Paatero, P.

    1994-09-01

    The data analysed in this report were obtained in tracer experiments performed from a specially excavated drift in good granite rock at a level 360 m below ground in the Stripa mine. Tracer transport paths from the injection points to the collecting sheets at the tunnel walls were tens of meters long. Data for the six tracers that arrived in measurable concentrations were processed by different means of data analysis to reveal the transport behaviour of solutes in the rock fractures. Techniques such as direct inversion of the data, Fourier analysis, Singular Value Decomposition (SVD) and non-negative least squares fitting (NNLS) were employed. A newly developed code based on a general-purpose approach for solving deconvolution-type or integral-equation problems, Extreme Value Estimation (EVE), proved to be a very helpful tool in deconvolving impulse responses from the injection flow rates and breakthrough curves of the tracers, and in assessing the physical confidence of the results. (23 refs., 33 figs.)
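
    Of the techniques listed, the NNLS route is the most compact to sketch: write the breakthrough curve as a causal convolution of the injection flow rate with an unknown non-negative impulse response and invert. The injection shape, response and noise are invented for the toy:

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import nnls

      rng = np.random.default_rng(9)
      nt = 120
      t = np.arange(nt, dtype=float)
      inj = np.exp(-t / 10.0)                                      # injection flow rate
      h_true = t ** 2 * np.exp(-t / 8.0); h_true /= h_true.sum()   # impulse response
      bt = np.convolve(inj, h_true)[:nt] + 1e-3 * rng.standard_normal(nt)

      # Causal convolution: bt = C @ h with C lower-triangular Toeplitz in inj.
      C = toeplitz(inj, np.zeros(nt))
      h_est, _ = nnls(C, bt)              # non-negative deconvolved impulse response
      print("response peak at t =", int(np.argmax(h_est)))   # expect ~16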

  1. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  2. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  3. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  4. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
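
    The underlying transform is simple even though robust algorithms are not: every neutral mass M predicts a ladder of m/z peaks at (M + z·1.007276)/z, so a naive deconvolver scores each candidate mass by the intensity found on its ladder. The sketch below does exactly that (tolerance, charge range and the test mass are invented; it will happily produce the half-mass artifacts the parsimonious method is designed to suppress):

      import numpy as np

      PROTON = 1.007276   # proton mass, Da

      def charge_deconvolve(mz, intensity, mass_grid, charges):
          # Score each candidate mass by summing intensity on its predicted m/z ladder.
          out = np.zeros(len(mass_grid))
          for i, M in enumerate(mass_grid):
              for z in charges:
                  target = (M + z * PROTON) / z
                  j = np.searchsorted(mz, target)
                  for k in (j - 1, j):
                      if 0 <= k < len(mz) and abs(mz[k] - target) < 0.05:
                          out[i] += intensity[k]
                          break
          return out

      # Toy spectrum: one 20 kDa species observed at charge states 10-14.
      M0 = 20000.0
      mz = np.sort([(M0 + z * PROTON) / z for z in range(10, 15)])
      inten = np.ones(mz.size)
      grid = np.arange(19990.0, 20010.0, 0.5)
      score = charge_deconvolve(mz, inten, grid, charges=range(8, 20))
      print("best mass:", grid[np.argmax(score)])   # ~20000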

  5. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  6. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described and some examples are presented to help users run the programs and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)

  7. Stochastic Blind Motion Deblurring

    KAUST Repository

    Xiao, Lei

    2015-05-13

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can therefore only be obtained with the help of prior information in the form of (often non-convex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors produces results with PSNR values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms.

  8. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
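
    The Toeplitz-plus-Levinson machinery the abstract relies on is the same one used in classical spiking deconvolution, and scipy exposes a Levinson-recursion solver directly. A sketch of that machinery (wavelet and trace are synthetic; this illustrates the Toeplitz solve, not the receiver-function estimator itself):

      import numpy as np
      from scipy.linalg import solve_toeplitz

      rng = np.random.default_rng(10)
      n, L = 600, 40                       # trace length, inverse-filter length
      k = np.arange(30)
      w = np.exp(-0.1 * k) * np.cos(0.5 * k)   # assumed (roughly minimum-phase) wavelet
      refl = np.zeros(n); refl[rng.choice(n - 60, 8) + 30] = rng.standard_normal(8)
      seis = np.convolve(refl, w)[:n] + 0.01 * rng.standard_normal(n)

      # Toeplitz normal equations solved by the Levinson recursion:
      # autocorrelation on the diagonal band, desired output a spike at zero lag.
      ac = np.correlate(seis, seis, mode="full")[n - 1:n - 1 + L]
      ac[0] *= 1.01                        # small white-noise (prewhitening) term
      rhs = np.zeros(L); rhs[0] = 1.0
      f = solve_toeplitz(ac, rhs)          # error-predicting / spiking filter
      est = np.convolve(seis, f)[:n]       # deconvolved trace approximates refl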

  9. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20 s frames were acquired; the study on each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R^2 = 0.68) was found between the values obtained by the two methods. Bland-Altman statistical analysis demonstrated that 97% of the values in the study (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P analysis method can be expected to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot, and it can be considered an alternative technique for finding and calculating the renal uptake rate.
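
    The R-P side of the comparison reduces to a straight-line fit: plotting kidney/heart against the cumulative integral of the heart curve divided by the heart curve gives the uptake rate as the slope. A synthetic check (curve shapes and the "true" parameters are invented):

      import numpy as np

      dt = 20.0                                     # frame duration, s
      t = np.arange(0, 300, dt)                     # uptake phase only
      heart = 1000.0 * np.exp(-t / 180.0) + 200.0   # assumed blood-pool curve
      uptake_rate, spill = 0.004, 0.3               # assumed true values
      kidney = spill * heart + uptake_rate * np.cumsum(heart) * dt

      # Rutland-Patlak plot: y = kidney/heart, x = integral(heart)/heart;
      # the fitted slope is the parenchymal uptake rate.
      x = np.cumsum(heart) * dt / heart
      y = kidney / heart
      slope, intercept = np.polyfit(x, y, 1)
      print(round(slope, 4), round(intercept, 2))   # ~0.004, ~0.3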

  10. Blind I/Q imbalance compensation technique for direct-conversion digital radio transceivers

    CSIR Research Space (South Africa)

    De Witt, JJ

    2009-05-01

    Digital signal processing techniques have widely been proposed to compensate for these mixer imperfections. Of these techniques, the class of blind compensation techniques seems very attractive since no test signals are required. This paper presents a...

  11. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    Science.gov (United States)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction, based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with a practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  12. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    International Nuclear Information System (INIS)

    Geloni, G.; Saldin, E.L.; Schneidmiller, E.A.; Yurkov, M.V.

    2004-01-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction, based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with a practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function

  13. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
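
    As an illustration of the matched-filtering baseline described above, the received signal can be cross-correlated with the transmitted chirp and the transit times read off the correlation peaks. The sample rate, chirp parameters, path delays, and noise level below are assumptions for the sketch, not values from the study.

```python
import numpy as np
from scipy.signal import chirp, correlate, find_peaks

fs = 10e6                                    # assumed sample rate (Hz)
t = np.arange(0, 20e-6, 1 / fs)
tx = chirp(t, f0=1e6, t1=t[-1], f1=3e6)      # coded excitation

# Received signal: direct path plus a weaker, overlapping reflection
n1, n2 = int(6e-6 * fs), int(8e-6 * fs)
rx = np.zeros(3 * len(tx))
rx[n1:n1 + len(tx)] += tx
rx[n2:n2 + len(tx)] += 0.6 * tx
rx += 0.05 * np.random.randn(len(rx))

# Matched filter: cross-correlate with the transmitted chirp
mf = correlate(rx, tx, mode="valid")
peaks, _ = find_peaks(np.abs(mf), height=0.5 * np.abs(mf).max())
print(peaks / fs)                            # estimated transit times (s)
```

    Pulse compression resolves the two overlapping arrivals because the compressed pulse width is set by the chirp bandwidth rather than its duration; the deconvolution approach compared above goes further by suppressing the correlation side lobes.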

  14. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring layer thickness, such as oxide on corroded steel, is described. A time domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that their method depends on the ability of a skilled operator to line up an arbitrary common feature of the signals received; using deconvolution, no operator manipulations are necessary, so less highly trained personnel may successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions, but in the case of magnetite its high velocity of sound meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)
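
    The deconvolution step described here amounts to a spectral division of the layered echo by the reference echo. A minimal sketch, assuming digitized echoes and simple water-level regularization; the function and parameter names are hypothetical.

```python
import numpy as np

def layer_response(reflected, reference, eps=1e-2):
    """Time-domain response of the layer, obtained by dividing the
    spectrum of the layered-system echo by that of a reference echo
    from a non-layered surface (water-level regularized)."""
    n = len(reflected)
    R = np.fft.rfft(reflected, n)
    S = np.fft.rfft(reference, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps * np.abs(S).max() ** 2)
    return np.fft.irfft(H, n)

# Layer thickness follows from the spacing dt of successive peaks in
# the response: d = v * dt / 2, with v the sound speed in the layer.
```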

  15. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.

  16. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)

  17. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

    We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We constructed score tests for deconvolution density testing with a known noise density and efficient score tests for the case of an unknown noise density. The

  18. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the noise amplified during Fourier regularization.
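
    Of the three methods compared above, Wiener filtering is the simplest to state. A minimal 2D sketch, assuming the PSF is known, centred, and already zero-padded to the image shape, and that the noise-to-signal power ratio `nsr` is constant across frequency:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Wiener deconvolution with a known PSF (same shape as the
    image, centred); nsr is the assumed constant noise-to-signal
    power ratio acting as the regularizer."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

    Tikhonov regularization replaces the constant `nsr` with a penalty term in the inversion, while ForWaRD follows the regularized Fourier-domain division with wavelet-domain shrinkage of the residual noise.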

  19. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into stability, accuracy, and time complexity of a numerical bijection between the time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric..., but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving... discrimination achieves mixed phase deconvolution and is equivalent to complex cepstrum based deconvolution by causality, which has lower time and space complexities as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing...

  20. A New Technique of Removing Blind Spots to Optimize Wireless Coverage in Indoor Area

    Directory of Open Access Journals (Sweden)

    A. W. Reza

    2013-01-01

    Full Text Available Blind spots (or bad sampling points) in indoor areas are positions where no signal exists (or the signal is too weak), and the presence of a receiver within a blind spot degrades the performance of the communication system. It is therefore a fundamental requirement to eliminate blind spots from the indoor area and obtain maximum coverage when designing wireless networks. In this regard, this paper combines ray-tracing (RT), genetic algorithm (GA), depth-first search (DFS), and the branch-and-bound method into a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage using a minimum number of transmitters. The proposed system outperforms existing techniques in terms of algorithmic complexity and demonstrates that the computation time can be reduced by as much as 99% and 75%, respectively, compared to existing algorithms. Moreover, in the experimental analysis, the coverage prediction successfully reaches 99%, and thus the proposed coverage model effectively guarantees the removal of blind spots.

  1. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  2. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the method to capture the filtered chemical flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
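
    The Van Cittert iteration underlying the approximate deconvolution method above is compact. A minimal 1D sketch, with the filter kernel and relaxation factor taken as assumptions:

```python
import numpy as np

def van_cittert(filtered, kernel, iters=10, beta=1.0):
    """Van Cittert iterative deconvolution:
    phi_{k+1} = phi_k + beta * (filtered - G(phi_k)),
    where G(.) is convolution with the known filter kernel."""
    phi = filtered.copy()
    for _ in range(iters):
        refiltered = np.convolve(phi, kernel, mode="same")
        phi = phi + beta * (filtered - refiltered)
    return phi
```

    Stopping after a small number of iterations acts as the regularization; letting the iteration run amplifies exactly the small-scale perturbations that the study identifies as the core difficulty of inverting the filter.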

  3. Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique

    Science.gov (United States)

    Järvinen, S. P.; Berdyugina, S. V.

    2010-10-01

    Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), when low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in the data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which

  4. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors.

  5. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  6. Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Vol. 12, No. 4 (2015), pp. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf

  7. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source.

  8. Analysis of blind identification methods for estimation of kinetic parameters in dynamic medical imaging

    Science.gov (United States)

    Riabkov, Dmitri

    Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of

  9. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
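
    The matrix-inversion view taken here is easy to make concrete: 1D convolution becomes a Toeplitz system, and the pseudo-inverse is one of the baselines the network is compared against. The kernel, test signal, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz, pinv

h = np.array([0.25, 0.5, 0.25])            # assumed blur kernel
n = 64
col = np.r_[h, np.zeros(n - len(h))]
row = np.r_[h[0], np.zeros(n - 1)]
H = toeplitz(col, row)                     # convolution as a matrix

x = np.zeros(n)                            # sparse test signal
x[20], x[35] = 1.0, 0.5
y = H @ x + 1e-3 * np.random.randn(n)      # blurred, noisy observation

x_hat = pinv(H) @ y                        # pseudo-inverse deconvolution
```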

  10. 2D Presentation Techniques of Mind-maps for Blind Meeting Participants.

    Science.gov (United States)

    Pölzer, Stephan; Miesenberger, Klaus

    2015-01-01

    Mind-maps, used as an ideation technique in co-located meetings (e.g. in brainstorming sessions) and of increasing importance in business and education, pose considerable accessibility challenges for blind meeting participants. Besides an overview of general aspects of accessibility issues in co-located meetings, this paper focuses on the design and development of alternative non-visual presentation techniques for mind-maps. The different aspects of serialized presentation techniques (e.g. treeview) for Braille and audio rendering and two-dimensional presentation techniques (e.g. tactile two-dimensional array matrix and edge-projection method [1]) are discussed based on the user feedback gathered in intermediate tests following a user centered design approach.

  11. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.
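
    The abstract does not spell out its definition of peakedness; a common choice in the minimum entropy deconvolution literature is the varimax-style norm sketched below, and the sketch assumes that choice.

```python
import numpy as np

def peakedness(y):
    """Varimax-style peakedness: large when a sequence is dominated
    by a few spikes, small for Gaussian-like sequences."""
    y = np.asarray(y, dtype=float)
    return np.sum(y ** 4) / np.sum(y ** 2) ** 2

# Minimum entropy deconvolution seeks a filter f maximizing
# peakedness(np.convolve(observed, f, mode="same")), e.g. by the
# iterative Wiggins scheme or generic gradient ascent.
```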

  12. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in "de-blurring" 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from 85Sr. The technique, when applied to a series of well annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  13. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

    Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  14. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Full Text Available Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in the recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  15. Bayesian Blind Separation and Deconvolution of Dynamic Image Sequences Using Sparsity Priors

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Vol. 34, No. 1 (2015), pp. 258-266 ISSN 0278-0062 R&D Projects: GA ČR GA13-29225S Keywords: Functional imaging * Blind source separation * Computer-aided detection and diagnosis * Probabilistic and statistical methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.756, year: 2015 http://library.utia.cas.cz/separaty/2014/AS/tichy-0431090.pdf

  16. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    list-mode approach to get the best angular resolution, to achieve both at the same time! The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map, which resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely due to the fact that detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and therefore we propose to evaluate several deconvolution algorithms (e.g. Richardson-Lucy, Maximum-Entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and simulations of key science objectives such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations/observations of the COSI balloon telescope. This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground-based applications such as nuclear medicine and environmental monitoring, as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.

  17. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed interval Kalman filter based deconvolution routine written in C is employed. It uses a state... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse the axial resolution at Full-Width-Half-Max is 0.581 mm corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ corresponding... to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core on an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds making it possible to perform real-time deconvolution on ultrasound data...

  18. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  19. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  20. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed; the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples at thermal equilibrium and not at thermal equilibrium is reported. The method is accurate and computationally efficient.

  1. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples.

  2. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, potentially improving the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  3. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
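
    The Richardson-Lucy scheme that performed best above reduces to a short multiplicative update; a minimal 2D sketch, without the total-variation term that the article adds as regularization:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iters=30, eps=1e-12):
    """Richardson-Lucy deconvolution (no TV regularization):
    x <- x * K^T( image / K(x) ), with K = convolution by the PSF."""
    x = np.full(image.shape, float(image.mean()))
    psf_flip = psf[::-1, ::-1]               # adjoint of K
    for _ in range(iters):
        estimate = fftconvolve(x, psf, mode="same")
        ratio = image / (estimate + eps)     # eps guards division by 0
        x = x * fftconvolve(ratio, psf_flip, mode="same")
    return x
```

    The multiplicative form keeps the estimate non-negative, which suits echo intensities; the TV penalty damps the noise amplification that otherwise appears after many iterations.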

  4. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to perform a smoothing of the data because of the sensitivity of the deconvolution algorithms with respect to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value for two filters (linear and non-linear). In order to test the effectiveness of these optimal smoothing values, some parameters of the calculated RRF were considered using this optimal smoothing. The comparison of these parameters with the theoretical ones revealed a better result in the case of the linear filter than in the non-linear case. The study was carried out simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  5. Blind Analysis in Particle Physics

    International Nuclear Information System (INIS)

    Roodman, A

    2003-01-01

    A review of the blind analysis technique, as used in particle physics measurements, is presented. The history of blind analyses in physics is briefly discussed. Next, the dangers and advantages of a blind analysis are described. Three distinct kinds of blind analysis in particle physics are presented in detail. Finally, the BABAR collaboration's experience with the blind analysis technique is discussed.

  6. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...

  7. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Brett [Institute of Biomedical Studies, Baylor University, Waco, TX 76798 (United States); Neumann, Elizabeth K. [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States); Stow, Sarah M.; May, Jody C.; McLean, John A. [Department of Chemistry, Vanderbilt University, Nashville, TN 37235 (United States); Vanderbilt Institute of Chemical Biology, Nashville, TN 37235 (United States); Vanderbilt Institute for Integrative Biosystems Research and Education, Nashville, TN 37235 (United States); Center for Innovative Technology, Nashville, TN 37235 (United States); Solouki, Touradj, E-mail: Touradj_Solouki@baylor.edu [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States)

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  8. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    International Nuclear Information System (INIS)

    Harper, Brett; Neumann, Elizabeth K.; Stow, Sarah M.; May, Jody C.; McLean, John A.; Solouki, Touradj

    2016-01-01

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  9. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
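
    The feedforward construction described above is a one-line deconvolution in Fourier space. A minimal sketch, assuming the system's frequency response has been measured on the rfft grid of the desired waveform; the floor `eps` protecting near-zero response values is an assumption.

```python
import numpy as np

def feedforward_input(desired, freq_response, eps=1e-3):
    """Drive signal whose output through a system with the given
    frequency response approximates `desired` (Fourier-domain
    deconvolution with a crude regularizing floor)."""
    D = np.fft.rfft(desired)
    H = np.asarray(freq_response)        # len(desired)//2 + 1 bins
    H_safe = np.where(np.abs(H) < eps, eps, H)
    return np.fft.irfft(D / H_safe, len(desired))
```

    The feedback loop proposed for the shaker then corrects the residual error left by nonlinearities that this linear inversion cannot capture.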

  10. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.

  11. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution of the responses of different volume ionization chambers. Methods: A 0.125cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2×2 cm to 30×30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution the penumbra decreased by 1 mm for field sizes ranging from 2×2 cm to 20×20 cm, along both lateral and longitudinal directions. For field sizes from 20×20 cm to 30×30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvoluted profiles for all the ionization chambers involved in the study. The differences in penumbral values between the deconvoluted profiles along the lateral and longitudinal directions were on the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvoluted profiles. Conclusion: The results of the deconvoluted profiles for the 0.125cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.

  12. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  13. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  14. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J. "; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism, but different in magnitude, must be on-scale recorded on the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, provided they have a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real time event-screening process.
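
    The "sdc" measure is straightforward to restate in code. A minimal sketch, assuming a uniformly sampled deconvolution trace whose source time function sits at the absolute peak:

```python
import numpy as np

def sdc(decon, dt, exclude_s=10.0):
    """Peak of the deconvolution divided by the mean absolute
    background, excluding `exclude_s` seconds around the peak
    (the source time function), as defined above."""
    decon = np.asarray(decon, dtype=float)
    ipeak = int(np.argmax(np.abs(decon)))
    half = int(exclude_s / (2.0 * dt))
    mask = np.ones(decon.size, dtype=bool)
    mask[max(0, ipeak - half):ipeak + half + 1] = False
    background = np.mean(np.abs(decon[mask]))
    return np.abs(decon[ipeak]) / background

# Deconvolutions with sdc of about 10 or higher were judged
# sufficiently pulse-like in the procedure described above.
```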

  15. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference from crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is fulfilled, for example, when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.

  16. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Full Text Available Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells which are most likely the cytoskeletal MreB protein and the division site regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  17. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the source distribution with the beamformer's point-spread function, and that the beamformer's point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.

  18. Blind Multiuser Detection by Kurtosis Maximization for Asynchronous Multirate DS/CDMA Systems

    Directory of Open Access Journals (Sweden)

    Peng Chun-Hsien

    2006-01-01

    Full Text Available Chi et al. proposed a fast kurtosis maximization algorithm (FKMA for blind equalization/deconvolution of multiple-input multiple-output (MIMO linear time-invariant systems. This algorithm has been applied to blind multiuser detection of single-rate direct-sequence/code-division multiple-access (DS/CDMA systems and blind source separation (or independent component analysis. In this paper, the FKMA is further applied to blind multiuser detection for multirate DS/CDMA systems. The ideas are to properly formulate discrete-time MIMO signal models by converting real multirate users into single-rate virtual users, followed by the use of FKMA for extraction of virtual users' data sequences associated with the desired user, and recovery of the data sequence of the desired user from estimated virtual users' data sequences. Assuming that all the users' spreading sequences are given a priori, two multirate blind multiuser detection algorithms (with either a single receive antenna or multiple antennas, which also enjoy the merits of superexponential convergence rate and guaranteed convergence of the FKMA, are proposed in the paper, one based on a convolutional MIMO signal model and the other based on an instantaneous MIMO signal model. Some simulation results are then presented to demonstrate their effectiveness and to provide a performance comparison with some existing algorithms.

  19. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.
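
    GCV selects the regularization weight that best predicts held-out data without requiring the noise variance. The paper derives a GCV function for its manifold-prior iteration; purely as a generic illustration, the sketch below applies GCV to plain Tikhonov-regularized deconvolution, where the FFT diagonalizes the blur and makes the influence-matrix trace explicit (all names are illustrative).

```python
import numpy as np

def gcv_tikhonov_deconvolve(y, psf, lambdas):
    """1D Tikhonov deconvolution with the regularization parameter chosen
    by minimizing the generalized cross-validation (GCV) score."""
    n = y.size
    Y, H = np.fft.fft(y), np.fft.fft(psf, n)
    best = None
    for lam in lambdas:
        filt = np.abs(H)**2 / (np.abs(H)**2 + lam)     # influence-matrix eigenvalues
        resid = np.sum(np.abs(Y * (1 - filt))**2) / n  # ||y - H x_lam||^2 (Parseval)
        score = (resid / n) / (np.sum(1 - filt) / n)**2
        if best is None or score < best[0]:
            best = (score, lam)
    lam = best[1]
    X = np.conj(H) * Y / (np.abs(H)**2 + lam)
    return np.real(np.fft.ifft(X)), lam
```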

  20. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI

    Czech Academy of Sciences Publication Activity Database

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, M.; Standara, M.; Starčuk jr., Zenon; Taxt, T.

    2016-01-01

    Roč. 75, č. 3 (2016), s. 1355-1365 ISSN 0740-3194 R&D Projects: GA ČR GAP102/12/2380; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : dynamic contrast-enhanced magnetic resonance imaging * multi-channel blind deconvolution * arterial input function * impulse residue function * renal cell carcinoma Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.924, year: 2016

  1. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
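
    Step (2) is ordinary Wiener deconvolution. A minimal generic sketch, with a scalar signal-to-noise ratio standing in for whatever spectral weighting the authors actually use (the constant snr is an assumption):

```python
import numpy as np

def wiener_deconvolve(measured, kernel, snr=100.0):
    """Recover s from measured = kernel * s + noise by Wiener inverse
    filtering; 1/snr plays the role of the noise-to-signal power ratio."""
    n = measured.size
    K = np.fft.rfft(kernel, n)       # kernel assumed to start at t = 0
    M = np.fft.rfft(measured, n)
    W = np.conj(K) / (np.abs(K)**2 + 1.0 / snr)
    return np.fft.irfft(M * W, n)
```

    In the model above, `measured` would be the VEMP and `kernel` the current estimate of the rate modulation, yielding the MUAP.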

  2. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also exponentially decreases with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with an exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long wavelength structure from the artificial low frequency components coming from an exponential damping.

  3. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF fitting photometry are commonly used. Iterative methods are important particularly in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
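
    The essential ingredients of such a simulator are point sources with known fluxes, a PSF, a background, and photon noise. A toy sketch along these lines (not the GlencoeSim code; the Gaussian PSF, Poisson noise, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_star_field(size=256, n_stars=300, fwhm=3.0, sky=50.0):
    """Render a crowded star field and return the image together with the
    true positions and fluxes, so deconvolution photometry can be scored."""
    sigma = fwhm / 2.355
    xs = rng.uniform(0, size, n_stars)
    ys = rng.uniform(0, size, n_stars)
    flux = 10.0 ** rng.uniform(2.0, 4.5, n_stars)   # wide brightness range
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.full((size, size), sky)
    for x0, y0, f in zip(xs, ys, flux):
        img += f * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2)) \
               / (2 * np.pi * sigma**2)
    return rng.poisson(img).astype(float), (xs, ys, flux)
```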

  4. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  5. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches including incorporation of the system response function in the reconstruction have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for ...

  6. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
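
    The Gold iteration itself is compact: a multiplicative update that keeps the estimate nonnegative, which is why it avoids the ringing of linear inverse filters. A minimal 1D sketch of the commonly used form (as in, e.g., ROOT's TSpectrum); the study's actual implementation and system-response handling may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

def gold_deconvolve(y, h, n_iter=500):
    """Gold deconvolution: x <- x * (H^T y) / (H^T H x), for nonnegative
    data y and kernel h, preserving positivity throughout."""
    h = h / h.sum()
    hr = h[::-1]                                  # correlation = adjoint H^T
    num = fftconvolve(y, hr, mode='same')         # H^T y, fixed across iterations
    x = np.clip(y.astype(float), 1e-12, None)
    for _ in range(n_iter):
        den = fftconvolve(fftconvolve(x, h, mode='same'), hr, mode='same')
        x *= num / np.maximum(den, 1e-12)
    return x
```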

  7. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at the different energies.

  8. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.

  9. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  10. Modified Particle Swarm Optimization for Blind Deconvolution and Identification of Multichannel FIR Filters

    Directory of Open Access Journals (Sweden)

    Khanagha Ali

    2010-01-01

    Full Text Available Blind identification of MIMO FIR systems has received wide attention in various fields of wireless data communications. Here, we use Particle Swarm Optimization (PSO) as the update mechanism of the well-known inverse filtering approach and we show its good performance compared to the original method. In particular, the proposed method is shown to be more robust in lower-SNR scenarios or in cases with smaller lengths of available data records. Also, a modified version of PSO is presented which further improves the robustness and precision of the PSO algorithm. However, the most important promise of the modified version is its drastically faster convergence compared to the standard implementation of PSO.

  11. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  12. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.
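
    The same fits can be reproduced outside Excel. For a first-order peak, a widely used analytic shape is the expression of Kitis et al. (1998), parameterized by peak height, activation energy, and peak temperature; the sketch below fits a sum of such peaks with scipy, assuming first-order kinetics (other kinetic orders need different expressions).

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def gcd_first_order(T, Im, E, Tm):
    """First-order glow-peak shape (Kitis et al. 1998 form); T, Tm in K, E in eV."""
    d = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * np.exp(1.0 + d
                       - (T / Tm)**2 * np.exp(d) * (1.0 - 2.0 * K_B * T / E)
                       - 2.0 * K_B * Tm / E)

def fit_glow_curve(T, I, guesses):
    """Deconvolve a glow curve into first-order peaks; 'guesses' is a list
    of (Im, E, Tm) triplets, one per assumed peak."""
    def model(T, *p):
        return sum(gcd_first_order(T, *pk) for pk in np.reshape(p, (-1, 3)))
    popt, _ = curve_fit(model, T, I, p0=np.ravel(guesses))
    return np.reshape(popt, (-1, 3))
```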

  13. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the Glow Curve Analysis Intercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters. (authors)

  14. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against those measured by well-established flow cytometry, even after batch correction was applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mono-nucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283

  15. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method to get fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown

  16. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

    The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in the processing of experimental spectra of Doppler broadening of the annihilation line in positron annihilation measurement is described. By comparing results from both algorithms it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if one uses the 1274 keV peak of 22Na, measured simultaneously with the annihilation peak, for deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes of the annihilation peak (e.g., the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
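
    For comparison with the Gold update sketched earlier, the Van Cittert iteration is additive and linear. A minimal sketch (the relaxation constant alpha is an assumption, and the smoothing or stopping rules such studies typically apply are omitted):

```python
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(y, h, alpha=0.2, n_iter=100):
    """Van Cittert iteration: x <- x + alpha * (y - h * x), started at x = y.
    Linear and simple, but it amplifies noise, so iterations must be limited."""
    h = h / h.sum()
    x = y.astype(float).copy()
    for _ in range(n_iter):
        x += alpha * (y - fftconvolve(x, h, mode='same'))
    return x
```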

  17. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...

  18. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit (GPU) multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
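
    The simplification that the GRBF model exploits is the closure of Gaussians under convolution: blurring a Gaussian basis function with a Gaussian PSF only widens it,

```latex
\bigl(\mathcal{N}(\mu_1,\sigma_1^2) * \mathcal{N}(\mu_2,\sigma_2^2)\bigr)(x)
  = \mathcal{N}\!\bigl(x;\,\mu_1+\mu_2,\ \sigma_1^2+\sigma_2^2\bigr),
```

    so the degraded image stays within the same GRBF family, and deconvolution reduces to re-estimating the control-point weights against narrower basis functions.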

  19. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

    After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier transform method and iterative numerical methods. The positivity of the measured magnitude is the basis of a new method by Yvon Biraud. The analytic continuation of the Fourier transform applied to the unknown function has been studied by Jean-Paul Sheidecker. An important bibliography is given. [fr

  20. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique, enabled by the optical memory effect, that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.

  1. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section
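
    The Morozov discrepancy principle picks the regularization weight at which the data misfit equals the noise level. The paper applies it to TV deconvolution via ADMM with a model function and Newton refinement; purely to illustrate the principle, the sketch below treats the simpler Tikhonov case, where the residual is monotone in the parameter and a log-scale bisection suffices (all names are illustrative).

```python
import numpy as np

def morozov_lambda(y, psf, delta, lam_lo=1e-8, lam_hi=1e4, iters=60):
    """Find lambda such that the Tikhonov residual norm matches the noise
    level delta, by bisection in log-lambda (residual grows with lambda)."""
    Y, H = np.fft.fft(y), np.fft.fft(psf, y.size)
    def residual(lam):
        X = np.conj(H) * Y / (np.abs(H)**2 + lam)
        return np.linalg.norm(np.fft.ifft(Y - H * X).real)
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)           # geometric midpoint
        if residual(lam) < delta:
            lam_lo = lam                         # under-regularized: increase lambda
        else:
            lam_hi = lam
    return np.sqrt(lam_lo * lam_hi)
```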

  2. A Blind High-Capacity Wavelet-Based Steganography Technique for Hiding Images into other Images

    Directory of Open Access Journals (Sweden)

    HAMAD, S.

    2014-05-01

    Full Text Available The flourishing field of Steganography is providing effective techniques to hide data into different types of digital media. In this paper, a novel technique is proposed to hide large amounts of image data into true colored images. The proposed method employs wavelet transforms to decompose images in a way similar to the Human Visual System (HVS) for more secure and effective data hiding. The designed model can blindly extract the embedded message without the need to refer to the original cover image. Experimental results showed that the proposed method outperformed all of the existing techniques not only in imperceptibility but also in terms of capacity. In fact, the proposed technique showed an outstanding performance on hiding a secret image whose size equals 100% of the cover image while maintaining excellent visual quality of the resultant stego-images.
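
    The paper's own embedding scheme is not reproduced here; as a generic illustration of blind wavelet-domain hiding, the sketch below uses quantization index modulation (QIM) on Haar detail coefficients, which a receiver can decode without the cover image. It assumes even image dimensions and float-precision stego images; rounding to 8-bit pixels would call for a larger quantization step.

```python
import numpy as np
import pywt

def embed_bits(cover, bits, step=8.0):
    """Blind hiding via QIM on the diagonal detail band of a one-level Haar
    DWT: a 0 bit snaps the coefficient to a multiple of step, a 1 bit to a
    multiple of step offset by step/2."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
    cD = cD.copy()
    flat = cD.ravel()                            # view into cD
    q = np.round(flat[:len(bits)] / step) * step
    flat[:len(bits)] = q + np.where(bits, step / 2.0, 0.0)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def extract_bits(stego, n_bits, step=8.0):
    """Blind extraction: re-decompose and read which lattice each carrier
    coefficient sits on; no reference to the cover image is needed."""
    _, (_, _, cD) = pywt.dwt2(stego.astype(float), 'haar')
    coeffs = cD.ravel()[:n_bits]
    return np.mod(np.round(coeffs / (step / 2.0)), 2).astype(bool)
```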

  3. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  4. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the curve fitting the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectra, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked with its application to the deconvolution and the calculation of the alpha-particle emission probabilities of 239Pu, 241Am and 235U. (author)

  5. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    We developed an approach to visualize fluorescent sterol in macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes, but not in foam cells, was fluorescent sterol confined to the droplet-limiting membrane.

  6. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of potential earthquake response of a soil deposit, a soil structure or a soil-foundation-structure system requires the knowledge of earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. A process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study which was conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence the output of this process. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  7. Blinding for unanticipated signatures

    NARCIS (Netherlands)

    D. Chaum (David)

    1987-01-01

    textabstractPreviously known blind signature systems require an amount of computation at least proportional to the number of signature types, and also that the number of such types be fixed in advance. These requirements are not practical in some applications. Here, a new blind signature technique

  8. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  9. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.

  10. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to do density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  11. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Full Text Available Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method with recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.

  12. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed-order kinetics and for a continuous trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) for the case of the continuous trap distribution, or a constant α for the case of mixed-order kinetics

  13. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses optically fast multichannel Thomson scattering optics that can be used for H-alpha emission profile measurements. A technique based on the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that, for this case, about 28 Fourier modes are optimum to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, the assumed up-down symmetry, and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges

  14. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

    Roč. 64, č. 2 (2004), s. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords : confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  15. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  16. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  17. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  18. Development of tactile floor plan for the blind and the visually impaired by 3D printing technique

    Directory of Open Access Journals (Sweden)

    Raša Urbas

    2016-07-01

    Full Text Available The aim of the research was to produce tactile floor plans for blind and visually impaired people for use in a museum. For the production of the tactile floor plans, the 3D printing technique was selected from among three different techniques. The 3D prints were made of white and colored ABS polymer materials. The development of the different elements of the tactile floor plans, as well as the problems encountered and the solutions found during 3D printing, are described in the paper.

  19. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses Bayesian information criteria (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data and allowing identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
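
    The BIC trades goodness of fit against a complexity penalty of k ln n, so an extra peak must buy a genuine drop in residual. A generic sketch of BIC-based selection of the number of peaks, here with Lorentzian components rather than decon1d's actual 19F line-shape model:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzians(x, *p):
    """Sum of Lorentzian peaks; p holds flattened (amplitude, center, width) triplets."""
    return sum(a * w**2 / ((x - c)**2 + w**2)
               for a, c, w in np.reshape(p, (-1, 3)))

def best_model_by_bic(x, y, candidate_guesses):
    """Fit models with increasing numbers of peaks and keep the one that
    minimizes BIC = n*ln(RSS/n) + k*ln(n)."""
    n, best = y.size, None
    for guesses in candidate_guesses:            # one initial-guess set per model
        p0 = np.ravel(guesses)
        try:
            popt, _ = curve_fit(lorentzians, x, y, p0=p0, maxfev=20000)
        except RuntimeError:
            continue                             # this model failed to converge
        rss = np.sum((y - lorentzians(x, *popt))**2)
        bic = n * np.log(rss / n) + p0.size * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, popt)
    return best
```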

  20. Utilization of statistical techniques for the analysis of XPS (X-ray photoelectron spectroscopy) and Auger electron spectra deconvolutions

    International Nuclear Information System (INIS)

    Puentes, M.B.

    1987-01-01

    For the analysis of XPS (X-ray photoelectron spectroscopy) and Auger spectra, it is important to perform peak separation and to estimate peak intensities. For this purpose, a methodology was implemented, including: a) spectrum filtering; b) subtraction of the base line (or inelastic background); c) deconvolution (separation of the distributions that make up the spectrum); and d) calculation of the error of the mean estimation, comprising adjustment quality tests. Software (in FORTRAN IV Plus) that permits applying the proposed methodology to experimental spectra was implemented. The quality of the methodology was tested with simulated spectra. (Author) [es

  1. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  2. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  3. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses the line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain acceptable signal-to-noise ratio. The comparison of the algorithm against Tikhonov-regularized reconstructions demonstrates considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity constrained spectral deconvolution scheme is effective in minimizing the line-shape distortions. The dual resolution reconstruction scheme is capable of minimizing spectral leakage artifacts. Copyright © 2013 Wiley Periodicals, Inc.

  4. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    Science.gov (United States)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.

  5. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied to the analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors in correcting line shape distortions, which occur due to differences between the water reference and the metabolite distributions.
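
    Morris-style water reference deconvolution replaces the measured (distorted) water lineshape with an ideal target lineshape in the time domain. The sketch below illustrates the core arithmetic with NumPy; the function names, the Gaussian target lineshape and the linewidth conversion are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def water_reference_deconvolve(metab_fid, water_fid, t, ideal_lw_hz=5.0, eps=1e-6):
            """Correct a metabolite FID using a water reference FID (sketch).

            Divides out the measured water decay and substitutes an ideal
            Gaussian decay, so instrumental lineshape distortions common to
            both signals cancel. All parameter choices are illustrative.
            """
            # Ideal target decay: Gaussian envelope giving roughly the
            # requested full width at half maximum (approximate conversion).
            sigma = np.pi * ideal_lw_hz / (2.0 * np.sqrt(np.log(2.0)))
            ideal = np.exp(-(sigma * t) ** 2)

            # Time-domain division transfers the distorted water lineshape
            # onto the ideal one; eps guards against division by ~0 late in
            # the FID, where the water signal has decayed into the noise.
            return metab_fid * ideal / (water_fid + eps)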

  6. The blind pushing technique for peripherally inserted central catheter placement through brachial vein puncture.

    Science.gov (United States)

    Lee, Jae Myeong; Cho, Young Kwon; Kim, Han Myun; Song, Myung Gyu; Song, Soon-Young; Yeon, Jae Woo; Yoon, Dae Young; Lee, Sam Yeol

    2018-03-01

    The objective of this study was to conduct a prospective clinical trial evaluating the technical feasibility and short-term clinical outcome of the blind pushing technique for placement of pretrimmed peripherally inserted central catheters (PICCs) through brachial vein access. Patients requiring PICC placement at any of the three participating institutions were prospectively enrolled between January and December 2016. The review boards of all participating institutions approved this study, and informed consent was obtained from all patients. PICC placement was performed using the blind pushing technique and primary brachial vein access. The following data were collected from unified case report forms: access vein, obstacles during PICC advancement, procedure time, and postprocedural complications. During the 12-month study period, 1380 PICCs were placed in 1043 patients. Of these, 1092 PICCs placed in 837 patients were enrolled, with 834 PICCs (76%) and 258 PICCs (24%) placed through brachial vein and nonbrachial vein access, respectively. In both arms, obstacles were most commonly noted in the subclavian veins (n = 220) and axillary veins (n = 94). Successful puncture of the access vein was achieved at first try in 1028 PICCs (94%). The technical success rate was 99%, with 1055 PICCs (97%) placed within 120 seconds of procedure time and 1088 PICCs (99%) having the tip located at the ideal position. Follow-up Doppler ultrasound detected catheter-associated upper extremity deep venous thrombosis (UEDVT) for 18 PICCs in 16 patients and late symptomatic UEDVT for 16 PICCs in 16 patients (3.1%). Catheter-associated UEDVT was noted for 28 PICCs (82%) and 6 PICCs (18%) placed through brachial vein and nonbrachial vein access, respectively. The incidence of obstacles and the procedure time (pushing technique and primary brachial vein access is technically feasible and may represent an alternative to the conventional PICC placement technique, having low incidences of

  7. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    A generally known problem of the GC separation of the trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with a cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 degrees C. The resolution of these CLA isomers obtained under the given conditions was not high enough for direct quantitative analysis, but it was, however, sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by the separation of a model CLA mixture prepared by mixing a commercial CLA mixture and a CLA isomer fraction obtained by HPLC semi-preparative separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two minor isomers trans-7,cis-9 and trans-8,cis-10 (in ratios up to 1:100).

  8. Iterated Gate Teleportation and Blind Quantum Computation.

    Science.gov (United States)

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  9. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and shown in various studies the potential to suppress artifacts due to

  10. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.

  11. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)
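
    As a rough illustration of the idea, the sketch below deconvolves a one-dimensional fluence profile in the frequency domain: the blur caused by random geometric error is modeled as a Gaussian, its inverse is approximated by a truncated series expansion (dropping the higher-order, high-frequency terms), and a Butterworth low-pass then filters out deviations on the field edges. The Gaussian blur model and every parameter value are assumptions made for this example.

        import numpy as np

        def robust_fluence(desired, sigma_px=3.0, n_terms=8, cutoff=0.15, order=4):
            """Series-expansion + Butterworth deconvolution of a 1D fluence map."""
            n = desired.size
            freq = np.fft.rfftfreq(n)                            # cycles/sample
            G = np.exp(-2.0 * (np.pi * sigma_px * freq) ** 2)    # Gaussian blur OTF

            # Truncated series for 1/G: sum_k (1 - G)^k. Discarding the
            # higher-order terms suppresses the high-frequency blow-up of
            # an exact inverse filter.
            inv_G = sum((1.0 - G) ** k for k in range(n_terms))

            # Butterworth low-pass rolls off what the truncation leaves.
            butter = 1.0 / np.sqrt(1.0 + (freq / cutoff) ** (2 * order))

            robust = np.fft.irfft(np.fft.rfft(desired) * inv_G * butter, n=n)
            robust[desired <= 0] = 0.0     # zero the fluence outside the field
            return np.clip(robust, 0.0, None)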

  12. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic energy needed to penetrate strongly attenuating materials can only be obtained with resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but deconvolution is problematic for austenitic steel. A time-domain deconvolution method is developed here that takes the characteristics of the wave into account: a first step of phase correction and a second step of spectral equalization that restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method.

  13. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    Full Text Available In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution to the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.
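
    Once the phase point-spread function has been measured, any linear restoration scheme can be applied. The following is a minimal Wiener-type deconvolution in NumPy, shown as one common stand-in for "conventional deconvolution"; the paper itself does not prescribe this particular filter, and the noise-to-signal ratio is an assumed constant.

        import numpy as np

        def wiener_deconvolve(image, psf, nsr=1e-2):
            """Frequency-domain Wiener deconvolution of a 2D image (sketch).

            `psf` must have the same shape as `image` and be centered, e.g.
            the measured phase PSF embedded in a zero array; `nsr` is an
            assumed noise-to-signal power ratio controlling regularization.
            """
            H = np.fft.fft2(np.fft.ifftshift(psf))        # PSF -> OTF
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener filter
            return np.real(np.fft.ifft2(np.fft.fft2(image) * W))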

  14. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for deconvolution of gamma-ray spectra from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were: centroid distance/FWHM ratio, signal/continuum ratio and counting rate. The test spectrum was synthetic, built with 3σ uncertainty. The tests gave suitable results for a centroid distance/FWHM ratio ≥ 2, a signal/continuum ratio ≥ 2 and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg-1 to (190±17) Bq.kg-1 for potassium-40; from (343±7) Bq.kg-1 to (881±6) Bq.kg-1 for thorium-232; and from (100±3) Bq.kg-1 to (164±4) Bq.kg-1 for uranium-238. The soil activity varies from (148±1) Bq.kg-1 to (652±31) Bq.kg-1 for potassium-40; from (1100±11) Bq.kg-1 to (5700±40) Bq.kg-1 for thorium-232; and from (190±2) Bq.kg-1 to (779±15) Bq.kg-1 for uranium-238. Among 11 samples, the activity value discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%, and the Figure of Merit values are around 5%. These results show that the method developed is reliable for this activity range and that convergence is good. NaI(Tl) detectors combined with the deconvolution method developed may therefore replace HPGe detectors within an acceptable limit, if the identification of each nuclide in the radioactive series is not required. [fr]
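
    The core of such a deconvolution is a derivative-free (zero-order) least-squares fit of overlapping peak shapes. A minimal SciPy sketch for a Gaussian doublet on a flat continuum is given below; the peak model, the flat background and all starting values are illustrative assumptions, and a real NaI(Tl) analysis would add a proper continuum model and counting-statistics weights.

        import numpy as np
        from scipy.optimize import minimize

        def gaussian(x, area, centroid, fwhm):
            """Gaussian photopeak parameterized by area, centroid and FWHM."""
            sigma = fwhm / 2.3548
            return area * np.exp(-0.5 * ((x - centroid) / sigma) ** 2) / (
                sigma * np.sqrt(2.0 * np.pi))

        def fit_doublet(channels, counts, p0):
            """Resolve two overlapping photopeaks with Nelder-Mead.

            p0 = [area1, c1, fwhm1, area2, c2, fwhm2, background].
            """
            def cost(p):
                model = (gaussian(channels, *p[0:3])
                         + gaussian(channels, *p[3:6]) + p[6])
                return np.sum((counts - model) ** 2)

            res = minimize(cost, p0, method='Nelder-Mead',
                           options={'maxiter': 20000, 'fatol': 1e-6})
            return res.x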

  15. Enhancement of Twins Fetal ECG Signal Extraction Based on Hybrid Blind Extraction Techniques

    Directory of Open Access Journals (Sweden)

    Ahmed Kareem Abdullah

    2017-07-01

    Full Text Available ECG machines are noninvasive systems used to measure the heartbeat signal. It is very important to monitor fetal ECG signals during pregnancy to check heart activity and to detect any problems early, before birth; the monitoring of ECG signals therefore has clinical significance and importance. In the case of a multi-fetal pregnancy, classical filtering algorithms are not sufficient to separate the ECG signals of the mother and the fetuses. In this paper the mixture consists of three ECG signals: the first signal is the mother ECG (M-ECG), the second is the Fetal-1 ECG (F1-ECG), and the third is the Fetal-2 ECG (F2-ECG); these signals are extracted using modified blind source extraction (BSE) techniques. The proposed approach is based on hybridization of two BSE techniques to ensure that the extracted signals are well separated. The results demonstrate that the proposed method is very efficient at extracting the useful ECG signals.

  16. The use of deconvolution techniques to identify the fundamental mixing characteristics of urban drainage structures.

    Science.gov (United States)

    Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G

    2010-01-01

    Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.

  17. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    International Nuclear Information System (INIS)

    Floberg, J M; Holden, J E

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)
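
    EM deconvolution with a known blurring kernel is the Richardson-Lucy iteration, so STEM-like filtering can be sketched in a few lines of SciPy: smooth the 4D data with a Gaussian, then iterate the multiplicative EM update using the same Gaussian as the point-spread function. The sigmas, iteration count and the use of an identical kernel for both steps are assumptions of this sketch, not the authors' tuned settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def stem_filter(dyn_pet, sigma=(1.0, 2.0, 2.0, 2.0), n_iter=20, eps=1e-8):
            """Gaussian smoothing followed by EM (Richardson-Lucy) deconvolution.

            dyn_pet: 4D array ordered (t, z, y, x); sigma: per-axis widths.
            """
            observed = gaussian_filter(dyn_pet, sigma)    # noise suppression
            estimate = np.full_like(observed, observed.mean())
            for _ in range(n_iter):
                blurred = gaussian_filter(estimate, sigma)
                ratio = observed / (blurred + eps)
                # A Gaussian kernel is symmetric, so its adjoint is the same
                # filter; this is the standard multiplicative EM update.
                estimate = estimate * gaussian_filter(ratio, sigma)
            return estimate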

  18. Change Blindness Phenomena for Virtual Reality Display Systems.

    Science.gov (United States)

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and an active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  19. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., the point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  20. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    The anelastic attenuation process during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss with increasing depth. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of interpretation pitfalls due to the attenuation effect commonly observed in deeper-level seismic data. The attenuation effect greatly influences the seismic images of the deeper target level, creating pitfalls in several respects. Seismic amplitude at the deeper target level often cannot represent the real subsurface character due to low amplitude values or chaotic events near the Basement. Frequency-wise, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is the simple tool for pointing out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before a further advanced interpretation method is applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the North East area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs at deeper levels of seismic data. We evaluate this pitfall by applying Gabor Deconvolution to address the attenuation problem. Gabor Deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor Deconvolution estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic shows better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic is used for further advanced reprocessing, the Seismic Impedance and Vp/Vs Ratio slices show a better reservoir delineation, in which the

  1. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitudes of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  2. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitudes of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.
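
    The heart of the method is a stabilized frequency-domain division of modeled by observed traces, restricted to the band where the artificial low frequencies live. A per-trace sketch is given below; the band limits and stabilization constant are assumptions for illustration, and the actual gradient computation via the adjoint-state method is not shown.

        import numpy as np

        def deconvolution_filter(d_mod, d_obs, dt=0.004, band=(2.0, 8.0), eps=1e-3):
            """Estimate the deconvolution filter between two damped traces.

            Normalizes the modeled by the observed data in the frequency
            domain, keeping only an assumed low-frequency band; the FWI
            objective would penalize this filter's deviation from a spike.
            """
            n = d_obs.size
            freq = np.fft.rfftfreq(n, d=dt)
            D_mod, D_obs = np.fft.rfft(d_mod), np.fft.rfft(d_obs)

            # Stabilized division implements the inherent normalization.
            denom = np.abs(D_obs) ** 2 + eps * np.max(np.abs(D_obs)) ** 2
            W = D_mod * np.conj(D_obs) / denom

            W[(freq < band[0]) | (freq > band[1])] = 0.0   # select the band
            return np.fft.irfft(W, n=n)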

  3. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology in nuclear logging. The principles of the familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements in unsaturated ledges (where radioactive elements are distributed unevenly) are presented. Building on the quantitative interpretation of intensity gamma-logging by the deconvolution method, a new quantitative interpretation method for separating radioactive elements is presented for interpreting gamma spectrum logs. This is a point-by-point spectrum striping deconvolution technology that can give the logging data a quantitative interpretation. (authors)

  4. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm in its own right, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
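
    Numerically, convolution of a release-rate input with a unit impulse response is a lower-triangular linear system, and deconvolution is its inversion by forward substitution, which is exactly the column-by-column arithmetic a spreadsheet implementation performs. The NumPy sketch below assumes uniform sampling and a nonzero first value of the unit impulse response; names are illustrative.

        import numpy as np

        def convolve_response(input_rate, uir, dt):
            """In vivo response from an in vitro input rate (discrete convolution)."""
            return np.convolve(input_rate, uir)[: input_rate.size] * dt

        def deconvolve_response(response, uir, dt):
            """Recover the input rate by forward substitution (point-area style)."""
            n = response.size
            x = np.zeros(n)
            for k in range(n):
                # Subtract the contribution of all earlier input values,
                # then solve for the current one.
                acc = sum(x[j] * uir[k - j] for j in range(k))
                x[k] = (response[k] / dt - acc) / uir[0]
            return x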

  5. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the powdered polymineral fraction were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor s that is independent of temperature and for s as a function of temperature.

  6. Measurement and deconvolution of detector response time for short HPM pulses: Part 1, Microwave diodes

    International Nuclear Information System (INIS)

    Bolton, P.R.

    1987-06-01

    A technique is described for measuring and deconvolving the response times of microwave diode detection systems in order to generate corrected input signals typical of an infinite detection rate. The method has been applied to cases of 2.86 GHz ultra-short HPM pulse detection where the pulse rise time is comparable to that of the detector, whereas the duration of a few nanoseconds is significantly longer. Results are specified in terms of the enhancement of equivalent deconvolved input voltages for given observed voltages. The convolution integral imposes the constraint of linear detector response to input power levels, which is physically equivalent to the conservation of integrated pulse energy in the deconvolution process. The applicable dynamic range of a microwave diode is therefore limited to a smaller signal region as determined by its calibration.

  7. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small-scale or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
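
    A full primal-dual interior point solver is too long to reproduce here, but the optimization problem itself is compact: minimize 0.5*||H f - y||^2 + lam*||f||_1, where H is the transfer (convolution) matrix, y the measured response and f the sparse impact force history. The sketch below solves it with ISTA, a deliberately simpler stand-in for the paper's PDIPM; all names and parameters are illustrative.

        import numpy as np

        def ista_impact_force(H, y, lam=0.1, n_iter=500):
            """l1-regularized sparse deconvolution of an impact force."""
            L = np.linalg.norm(H, 2) ** 2   # Lipschitz constant of the gradient
            f = np.zeros(H.shape[1])
            for _ in range(n_iter):
                grad = H.T @ (H @ f - y)
                z = f - grad / L
                # Soft thresholding promotes the sparsity of impact events.
                f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return f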

  8. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  9. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by applying the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  10. Efficacy and complications associated with a modified inferior alveolar nerve block technique. A randomized, triple-blind clinical trial.

    Science.gov (United States)

    Montserrat-Bosch, Marta; Figueiredo, Rui; Nogueira-Magalhães, Pedro; Arnabat-Dominguez, Josep; Valmaseda-Castellón, Eduard; Gay-Escoda, Cosme

    2014-07-01

    To compare the efficacy and complication rates of two different techniques for inferior alveolar nerve blocks (IANB), a randomized, triple-blind clinical trial comprising 109 patients who required lower third molar removal was performed. In the control group, all patients received an IANB using the conventional Halsted technique, whereas in the experimental group a modified technique using a more inferior injection point was performed. A total of 100 patients were randomized. The modified technique group showed a significantly longer onset time in the lower lip and chin area, and was frequently associated with a lingual electric discharge sensation. Three failures were recorded, two of them in the experimental group. No relevant local or systemic complications were registered. Both IANB techniques used in this trial are suitable for lower third molar removal. However, performing an inferior alveolar nerve block at a more inferior position (modified technique) extends the onset time, does not seem to reduce the risk of intravascular injections, and might increase the risk of lingual nerve injuries.

  11. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing in order to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. This a priori information reflects the physical properties of ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence, and deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables not only the removal of the waveform emitted by the transducer but also the estimation of the phase, a parameter that is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  12. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area has been in the preliminary study phase of geology, geochemistry, and geophysics. As part of the geophysical study, gravity measurements have been carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The analysis results will be useful in making a more realistic conceptual model of the Endut geothermal area.
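
    Euler deconvolution rests on Euler's homogeneity equation, (x - x0)∂g/∂x + (y - y0)∂g/∂y + (z - z0)∂g/∂z = N(B - g), solved for the source position (x0, y0, z0) and background B over a moving data window with an assumed structural index N. A single-window least-squares sketch in NumPy follows; gradient computation and window selection are left out, and all names are illustrative.

        import numpy as np

        def euler_window(x, y, z, g, gx, gy, gz, N=2.0):
            """Solve Euler's homogeneity equation on one gravity-data window.

            x, y, z: observation coordinates; g: anomaly values; gx, gy, gz:
            field gradients; N: assumed structural index.
            """
            # Rearranged linear system: A [x0, y0, z0, B]^T = b
            A = np.column_stack([gx, gy, gz, N * np.ones_like(g)])
            b = x * gx + y * gy + z * gz + N * g
            sol, *_ = np.linalg.lstsq(A, b, rcond=None)
            x0, y0, z0, B = sol
            return x0, y0, z0, B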

  13. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers with a complete and self-contained treatment of dependent source separation, including the latest developments in this field. The book gives an overview of blind source separation, in which three promising blind separation techniques that can tackle mutually correlated sources are presented. The book then focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  14. Blind Naso-Endotracheal Intubation

    African Journals Online (AJOL)

    Difficult endotracheal intubation techniques include use of a fiberoptic bronchoscope, an intubating laryngeal mask airway, tracheostomy, and blind nasotracheal and retrograde intubation. According to the Difficult Airway Society guidelines, intubating with the aid of a fiberoptic scope has taken its place as the standard adjuvant for.

  15. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
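
    The titration example reduces to a linear unmixing problem: if the columns of A hold the spectra of the pure indicators and the columns of Y the measured mixture spectra, then the concentration profiles are recovered by applying the pseudoinverse of A, built from its SVD with small singular values discarded. A NumPy sketch, with an assumed truncation rank, is shown below.

        import numpy as np

        def unmix_spectra(A, Y, rank=None):
            """Deconvolute mixture spectra via a truncated-SVD pseudoinverse.

            A: (n_wavelengths, n_components) pure-component spectra;
            Y: (n_wavelengths, n_samples) measured mixtures.
            """
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            if rank is not None:                 # drop noise-dominated modes
                U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
            A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
            return A_pinv @ Y                    # concentration profiles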

  16. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

    In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal versus dose curve has been compared with the one measured in a 60Co beam. Different experiments have been performed in order to observe the effects of thermal stress and radiation damage on the detector sensitivity, and a LET dependence of the TL response has been observed. In order to obtain a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are reported in detail, and the possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  17. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    Full Text Available We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  18. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
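
    In this setting the tissue curve is the convolution of the arterial input function (AIF) with a flow-scaled residue function, so deconvolution amounts to inverting a lower-triangular Toeplitz system with a regularized SVD. The sketch below fixes the truncation threshold for brevity, whereas the paper selects the regularization parameter by regression; sampling and variable names are assumptions.

        import numpy as np
        from scipy.linalg import toeplitz

        def dce_residue(aif, tissue, dt, rel_thresh=0.15):
            """Truncated-SVD deconvolution of a DCE tissue curve (sketch)."""
            n = aif.size
            # Discrete convolution with the AIF as a lower-triangular matrix.
            A = dt * toeplitz(aif, np.zeros(n))
            U, s, Vt = np.linalg.svd(A)
            # Discard singular values below a fraction of the largest one.
            s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)
            return Vt.T @ (s_inv * (U.T @ tissue))   # flow-scaled residue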

  19. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  20. A deconvolution method for deriving the transit time spectrum for ultrasound propagation through cancellous bone replica models.

    Science.gov (United States)

    Langton, Christian M; Wille, Marie-Luise; Flegg, Mark B

    2014-04-01

    The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportion of bone and marrow propagated. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectrum analyses derived from Bland-Altman analysis ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available utilising ultrasound.
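
    The active-set deconvolution can be sketched with SciPy's nnls, which implements the Lawson-Hanson active-set algorithm: build the convolution matrix of the transmitted input signal and solve for a non-negative transit time spectrum. Uniform sampling and the normalization step are assumptions of this illustration.

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.optimize import nnls

        def transit_time_spectrum(input_sig, output_sig):
            """Non-negative deconvolution of output = conv(input, h)."""
            n = output_sig.size
            col = np.zeros(n)
            col[: input_sig.size] = input_sig
            A = toeplitz(col, np.zeros(n))        # convolution matrix
            h, _ = nnls(A, output_sig)            # active-set NNLS solve
            return h / h.sum()                    # proportion per transit time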

  1. RESTORATION TECHNIQUE FOR PLEIADES-HR PANCHROMATIC IMAGES

    Directory of Open Access Journals (Sweden)

    C. Latry

    2012-07-01

    Full Text Available The Pleiades-HR satellite was launched on the 17th of December 2011 from Kourou Space Centre, French Guyana. Like other high resolution optical satellites, it acquires both panchromatic images, with 70 cm spatial resolution, and lower resolution multispectral images with 2.8 m spatial resolution. Pleiades-HR is an optimized system, which means that the Modulation Transfer Function has a low value at the Nyquist frequency, in order to reduce both the telescope diameter and aliasing effects. The Shannon sampling condition is thus met at first order, which also makes classical ground processing, such as image matching or resampling, better justified from a mathematical point of view. Raw images are thus blurry, which implies a deconvolution stage that restores sharpness but also increases the noise level in the high frequency domain. A denoising step, based upon a wavelet packet coefficient thresholding/shrinkage technique, allows the final noise level to be controlled. Each of these methods includes numerous parameters that have to be assessed during the inflight commissioning period: the deconvolution filter, which depends on the MTF assessment; the instrumental noise model; the noise level target for denoised images; and the wavelet packet decomposition level. This paper aims to precisely describe the deconvolution/denoising algorithms and how their main parameters were set up during the inflight commissioning stage. Special attention is given to structured noise induced by the Pleiades-HR on-board wavelet-based compression algorithm.
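
    The denoising half of such a pipeline can be sketched with PyWavelets: transform the deconvolved image, soft-threshold the detail coefficients against a median-based noise estimate, and reconstruct. A plain wavelet transform is used here instead of the wavelet packets of the actual ground segment, and the wavelet, level and threshold factor are assumptions.

        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet='db4', level=3, k=3.0):
            """Soft-threshold wavelet shrinkage of a deconvolved image."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            # Robust noise estimate from the finest diagonal detail band.
            sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(d, k * sigma, mode='soft') for d in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)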

  2. Study of the lifetime of the TL peaks of quartz: comparison of the deconvolution using the first order kinetic with the initial rise method

    International Nuclear Information System (INIS)

    RATOVONJANAHARY, A.J.F.

    2005-01-01

    Quartz is a thermoluminescent material that can be used for dating and/or dosimetry. This material has been used since the 1960s for dating samples such as pottery, flint, etc., but the method is still subject to improvement. One of the problems of thermoluminescence dating is the estimation of the lifetime of the 'used peak'. The application of the glow-curve deconvolution (GCD) technique for the analysis of a composite thermoluminescence glow curve into its individual glow peaks has been applied widely since the 1980s, and many functions describing a single glow peak have been proposed. For analysing the behaviour of quartz, thermoluminescence glow-curve deconvolution (GCD) functions are compared for first-order kinetics. The free parameters of the GCD functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be obtained experimentally; the activation energy (E) is the additional free parameter. The lifetime (τ) of each glow peak, which is an important factor for dating, is calculated from these three parameters. For 'used peak' lifetime analysis, GCD results are compared with those from the initial rise method (IRM). Results vary noticeably from method to method. [fr]
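
    For first-order kinetics, the fitted triple (Im, Tm, E) fixes the frequency factor, and with it the lifetime at the storage temperature. The sketch below uses the widely used Kitis first-order glow-peak expression and the first-order peak condition; the heating rate and storage temperature are assumed values.

        import numpy as np

        K_B = 8.617e-5   # Boltzmann constant in eV/K

        def first_order_peak(T, Im, Tm, E):
            """Kitis first-order glow-peak function of (Im, Tm, E)."""
            d = (E / (K_B * T)) * (T - Tm) / Tm
            return Im * np.exp(
                1.0 + d
                - (T / Tm) ** 2 * np.exp(d) * (1.0 - 2.0 * K_B * T / E)
                - 2.0 * K_B * Tm / E)

        def peak_lifetime(E, Tm, beta=1.0, T_store=293.0):
            """Trap lifetime at storage temperature from fitted E and Tm.

            First-order condition: s = beta*E/(k*Tm^2) * exp(E/(k*Tm)),
            then tau = exp(E/(k*T_store)) / s. beta is the heating rate (K/s).
            """
            s = beta * E / (K_B * Tm ** 2) * np.exp(E / (K_B * Tm))
            return np.exp(E / (K_B * T_store)) / s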

  3. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  4. Single-blind trial addressing the differential effects of two reflexology techniques versus rest, on ankle and foot oedema in late pregnancy.

    Science.gov (United States)

    Mollart, L

    2003-11-01

    This single-blind randomised controlled trial explored the differential effects of two different foot reflexology techniques versus a period of rest on oedema-relieving effects and symptom relief in healthy pregnant women with foot oedema. Fifty-five women in the third trimester were randomly assigned to one of three groups: a period of rest, 'relaxing' reflexology techniques, or a specific 'lymphatic' reflexology technique for 15 min, with pre- and post-therapy ankle and foot circumference measurements and a participant questionnaire. There was no statistically significant difference in the circumference measurements between the three groups; however, the lymphatic technique reflexology group mean circumference measurements were all decreased. A significant reduction in the women's symptom mean measurements in all groups (preflexology techniques, relaxing reflexology techniques and a period of rest had a non-significant oedema-relieving effect. From the women's viewpoint, lymphatic reflexology was the preferred therapy, with a significant increase in symptom relief.

  5. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing in order to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. This a priori information reflects the physical properties of ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence, and deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables not only the removal of the waveform emitted by the transducer but also the estimation of the phase, a parameter that is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  6. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

    The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where several trapping levels have a probability of retrapping, the electrons trapped at one level can transfer from this level to another through retrapping via the conduction band during TL readout. At present, however, the method of CGCD takes no account of electron transitions between the trapping levels; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  7. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structure of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in the threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and turns from the intersection of the crack and the root of the thread back to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the threads. The delay time is the same as the propagation delay time of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  8. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak-width variation performs best. The ultimate approach, as yet to be implemented, might be library-oriented fitting with peak position variation allowed according to the peak energy uncertainty specified in the library. (author)

  9. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order-kinetics thermoluminescence (TL) glow curves has been developed. A non-linear function describing a single glow peak is fitted to experimental points using the Levenberg-Marquardt least squares method. The main advantage of GlowFit is its ability to resolve complex TL glow curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly Microsoft Windows program. Its graphic interface enables easy, intuitive manipulation of glow peaks, both at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
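    As an illustration of the kind of fit GlowFit performs, the sketch below fits a single first-order glow peak with the Levenberg-Marquardt solver in SciPy. It uses the Kitis analytical approximation of the Randall-Wilkins first-order peak; the peak parameters and noise level are invented for the example, and GlowFit itself fits sums of such peaks with constraints.

      import numpy as np
      from scipy.optimize import curve_fit

      k = 8.617e-5  # Boltzmann constant, eV/K

      def first_order_peak(T, Im, E, Tm):
          """Kitis approximation of one first-order glow peak (intensity vs temperature)."""
          x = (E / (k * T)) * (T - Tm) / Tm
          return Im * np.exp(1 + x - (T**2 / Tm**2) * np.exp(x) * (1 - 2*k*T/E) - 2*k*Tm/E)

      # Synthetic 'measured' glow curve (assumed true parameters: Im=1e4, E=1.2 eV, Tm=480 K)
      T = np.linspace(350, 600, 251)
      rng = np.random.default_rng(1)
      y = first_order_peak(T, 1e4, 1.2, 480) + rng.normal(0, 50, T.size)

      popt, pcov = curve_fit(first_order_peak, T, y, p0=[9e3, 1.0, 470], method="lm")
      print("Im=%.0f  E=%.2f eV  Tm=%.1f K" % tuple(popt))

    Strongly overlapping peaks would then be handled by fitting a sum of such terms, with selected parameters fixed or constrained as described above.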

  10. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which makes it possible to 'cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
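    The wavelet-denoising regularization can be pictured as a thresholding step applied between deconvolution iterations. The sketch below, which assumes the PyWavelets package and a 2D floating-point image, soft-thresholds the detail coefficients with a universal threshold; the original work embeds this idea in a Bayesian restoration scheme rather than in this bare form.

      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="db4", sigma=None, levels=3):
          """Soft-threshold wavelet detail coefficients of a 2D image."""
          coeffs = pywt.wavedec2(img, wavelet, level=levels)
          if sigma is None:
              # robust noise estimate from the finest diagonal detail band
              sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
          denoised = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode="soft") for c in level)
              for level in coeffs[1:]
          ]
          return pywt.waverec2(denoised, wavelet)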

  11. The blind leading the blind: use and misuse of blinding in randomized controlled trials.

    Science.gov (United States)

    Miller, Larry E; Stewart, Morgan E

    2011-03-01

    The use of blinding strengthens the credibility of randomized controlled trials (RCTs) by minimizing bias. However, there is confusion surrounding the definition of blinding as well as the terms single, double, and triple blind. It has been suggested that these terms should be discontinued due to their broad misinterpretation. We recommend that, instead of abandoning the use of these terms, explicit definitions of blinding should be adopted. We address herein the concept of blinding, propose standard definitions for the consistent use of these terms, and detail when different types of blinding should be utilized. Standardizing the definition of blinding and utilizing proper blinding methods will improve the quality and clarity of reporting in RCTs. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorber is a single N-shaped wave, and the PA signals of complicated biological tissue can be considered a combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates subsequent work but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, comprising deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints: positive polarity and spectral consistency. With our proposed method, the resulting PA images yield more detailed structural information, and micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
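    A minimal version of the deconvolution step (before the EMD stage) can be written as a Wiener-type inverse filter using the measured PSF; the regularization constant below is an assumed tuning parameter, not a value from the paper.

      import numpy as np

      def wiener_deconvolve(raw, psf, eps=1e-2):
          """Deconvolve a raw PA A-line with a measured system PSF."""
          n = len(raw)
          H = np.fft.rfft(psf, n)                    # system transfer function
          S = np.fft.rfft(raw, n)
          W = np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized inverse filter
          return np.fft.irfft(S * W, n)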

  13. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods in the literature; much work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  14. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

    Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (199IHg and 201MeHg) are added before sample preparation to quantify the extent of the methylation and demethylation processes. Focused microwave digestion was evaluated to perform the quantitative extraction of such compounds from solid matrices of environmental interest. Satisfactory results were obtained in different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464), both by GC-ICPMS and by GC-MS, demonstrating the suitability of the proposed analytical method. (author)
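    Conceptually, isotope pattern deconvolution reduces to a small linear least squares problem: the measured isotope pattern of each species is modelled as a mix of the natural pattern and the enriched-spike patterns, and the molar fractions are recovered by least squares. The sketch below illustrates the idea with placeholder abundance values; they are not certified data.

      import numpy as np

      A_natural  = np.array([0.170, 0.232, 0.132, 0.299])  # hypothetical subset of Hg isotope abundances
      A_spike199 = np.array([0.910, 0.050, 0.020, 0.020])  # 199Hg-enriched spike pattern (assumed)
      A_spike201 = np.array([0.020, 0.040, 0.900, 0.040])  # 201Hg-enriched spike pattern (assumed)

      A = np.column_stack([A_natural, A_spike199, A_spike201])
      measured = np.array([0.35, 0.15, 0.28, 0.22])        # observed pattern in one species

      fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(fractions)  # molar contributions of natural, 199-spike, 201-spike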

  15. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

    Full Text Available The deconvolution effect of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems is investigated by using the finite element method. Two different earthquake input mechanisms are used to consider the deconvolution effects in the analyses: the standard rigid-base input and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example, and the 1999 Kocaeli earthquake ground motion is selected as the near-fault record. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  16. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
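    The core of such Fourier deconvolution can be sketched as dividing the spectrum's transform by that of an assumed Gaussian broadening function while re-apodizing with a narrower one to keep the noise bounded; the widths below are illustrative choices, not values from the chapter.

      import numpy as np

      def fourier_enhance(spectrum, field_step, width_remove, width_keep):
          """Replace an assumed broad Gaussian lineshape by a narrower one."""
          n = spectrum.size
          f = np.fft.rfftfreq(n, d=field_step)                     # conjugate of the field axis
          remove = np.exp(-2 * (np.pi * width_remove * f) ** 2)    # FT of broad Gaussian to remove
          keep = np.exp(-2 * (np.pi * width_keep * f) ** 2)        # FT of narrower Gaussian kept
          S = np.fft.rfft(spectrum)
          return np.fft.irfft(S * keep / np.maximum(remove, 1e-6), n)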

  17. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    Science.gov (United States)

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.

  18. High Resolution Imaging of the Sun with CORONAS-1

    Science.gov (United States)

    Karovska, Margarita

    1998-01-01

    We applied several image restoration and enhancement techniques to CORONAS-I images. We carried out a characterization of the point spread function (PSF) using the unique capability of the blind iterative deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation when limited a priori information is available on its characteristics. We also applied image enhancement techniques to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield of space observations by improving the resolution and reducing noise in the images.
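    In spirit, BID alternates two constrained deconvolution half-steps, updating the PSF with the current object estimate fixed and vice versa. The following sketch uses Richardson-Lucy multiplicative updates for both half-steps (an Ayers-Dainty-style loop); the flat initial guesses, iteration counts, and full-frame PSF support are arbitrary simplifications for a 2D float image, not the CORONAS-I processing chain.

      import numpy as np
      from scipy.signal import fftconvolve

      def rl_step(estimate, kernel, data, n_iter=5):
          """A few Richardson-Lucy updates of `estimate`, holding `kernel` fixed."""
          for _ in range(n_iter):
              conv = fftconvolve(estimate, kernel, mode="same")
              ratio = data / np.maximum(conv, 1e-12)
              estimate = estimate * fftconvolve(ratio, kernel[::-1, ::-1], mode="same")
          return estimate

      def blind_deconvolve(data, n_cycles=20):
          obj = np.full_like(data, data.mean())   # flat first guess of the object
          psf = np.ones_like(data) / data.size    # flat first guess of the PSF
          for _ in range(n_cycles):
              psf = rl_step(psf, obj, data)       # update PSF with the object fixed
              psf = psf / psf.sum()               # keep the PSF normalized
              obj = rl_step(obj, psf, data)       # update object with the PSF fixed
          return obj, psf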

  19. Recovering the fine structures in solar images

    Science.gov (United States)

    Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.

    1994-01-01

    Several examples are presented of the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when limited a priori information is available about its characteristics. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. To demonstrate the potential of image post-processing for probing the fine scale and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights, in regions with different large-scale magnetic field structures.

  20. A Dynamic Tap Allocation for Concurrent CMA-DD Equalizers

    Directory of Open Access Journals (Sweden)

    Trindade, Diego von B. M.

    2010-01-01

    Full Text Available This paper proposes a dynamic tap allocation for the concurrent CMA-DD equalizer as a low-complexity solution to the blind channel deconvolution problem. The number of taps is a crucial factor affecting the performance and complexity of most adaptive equalizers; generally, an equalizer requires a large number of taps to cope with long delays in the channel multipath profile. Simulations show that the proposed blind equalizer is able to solve the blind channel deconvolution problem with a specified, reduced number of active taps. As a result, it minimizes both the output excess mean square error due to inactive taps, during and after equalizer convergence, and the hardware complexity.
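    For reference, the CMA half of such a scheme reduces to a simple stochastic-gradient tap update. The sketch below shows plain CMA equalization only (the concurrent decision-directed branch and the dynamic tap-allocation logic of the paper are not reproduced), with an assumed step size and tap count.

      import numpy as np

      def cma_equalize(x, n_taps=21, mu=1e-3, R2=1.0):
          """Blind CMA equalization of a complex baseband sequence x."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                       # centre-spike initialization
          y = np.zeros(len(x) - n_taps, dtype=complex)
          for k in range(len(y)):
              u = x[k:k + n_taps][::-1]              # regressor (most recent sample first)
              y[k] = np.dot(w, u)
              e = y[k] * (R2 - np.abs(y[k]) ** 2)    # CMA error term
              w = w + mu * e * np.conj(u)            # stochastic-gradient tap update
          return y, w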

  1. Double blind randomised controlled trial of two different breathing techniques in the management of asthma.

    Science.gov (United States)

    Slader, C A; Reddel, H K; Spencer, L M; Belousova, E G; Armour, C L; Bosnic-Anticevich, S Z; Thien, F C K; Jenkins, C R

    2006-08-01

    Previous studies have shown that breathing techniques reduce short-acting beta(2) agonist use and improve quality of life (QoL) in asthma. The primary aim of this double blind study was to compare the effects of breathing exercises focusing on shallow nasal breathing with those of non-specific upper body exercises on asthma symptoms, QoL, other measures of disease control, and inhaled corticosteroid (ICS) dose. This study also assessed the effect of peak flow monitoring on outcomes in patients using breathing techniques. After a 2 week run in period, 57 subjects were randomised to one of two breathing techniques learned from instructional videos. During the following 30 weeks subjects practised their exercises twice daily and as needed for relief of symptoms. After week 16, two successive ICS downtitration steps were attempted. The primary outcome variables were QoL score and daily symptom score at week 12. Overall there were no clinically important differences between the groups in primary or secondary outcomes at weeks 12 or 28. The QoL score remained unchanged (0.7 at baseline v 0.5 at week 28, p = 0.11 both groups combined), as did lung function and airway responsiveness. However, across both groups, reliever use decreased by 86% (p>0.10 between groups). Peak flow monitoring did not have a detrimental effect on asthma outcomes. Breathing techniques may be useful in the management of patients with mild asthma symptoms who use a reliever frequently, but there is no evidence to favour shallow nasal breathing over non-specific upper body exercises.

  2. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.

  3. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain, where numerical inversion provides an inverse filter that operates on frequency-transformed, scatter-degraded images. The resulting inverse-transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The deconvolution method appears to be clinically applicable to a variety of digital projection images
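    The scatter-deconvolution idea can be condensed as follows: with the scatter PSF modelled as a wide Gaussian carrying a fraction SF of the detected events, the system transfer function is (1-SF) plus SF times the Gaussian's transform, and dividing the image spectrum by it removes the scatter estimate. The parameter values in this sketch are placeholders, not the measured values of the paper.

      import numpy as np

      def descatter(image, sigma_pix=40.0, SF=0.4):
          """Remove a Gaussian scatter estimate by frequency-domain inversion."""
          ny, nx = image.shape
          fy = np.fft.fftfreq(ny)[:, None]       # spatial frequencies, cycles per pixel
          fx = np.fft.fftfreq(nx)[None, :]
          scatter_mtf = np.exp(-2 * (np.pi * sigma_pix) ** 2 * (fx ** 2 + fy ** 2))
          H = (1.0 - SF) + SF * scatter_mtf      # primary impulse + Gaussian scatter PSF
          return np.real(np.fft.ifft2(np.fft.fft2(image) / H))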

  4. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.

  5. Blind Deconvolution for Jump-Preserving Curve Estimation

    Directory of Open Access Journals (Sweden)

    Xingfang Huang

    2010-01-01

    ... when recovering the signals. Our procedure is based on three local linear kernel estimates of the regression function, constructed from observations in a left-side, a right-side, and a two-side neighborhood of a given point, respectively. The estimated function at the given point is then defined by one of the three estimates with the smallest weighted residual sum of squares. To better remove the noise and blur, this estimate can also be updated iteratively. Performance of this procedure is investigated by both simulation and real data examples, from which it can be seen that our procedure performs well in various cases.

  6. A Study of Color Transformation on Website Images for the Color Blind

    OpenAIRE

    Siew-Li Ching; Maziani Sabudin

    2010-01-01

    In this paper, we study color transformation methods for website images for the color blind. The most common category of color blindness is red-green color blindness, in which these colors are perceived as beige. By transforming the colors of the images, the color blind can improve their color visibility and have a better view when browsing websites. To transform the colors of website images, we study two algorithms, which are conversion techniques from RGB colo...

  7. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using the Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

    Full Text Available Interpretation of high resolution aeromagnetic data of Ilesha and its environs, within the basement complex geological setting of Southwestern Nigeria, was carried out in this study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. The investigation used Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation and to geometry and depth-weighting analyses across the study area, using the Euler deconvolution filter control file in Oasis montaj software. Total magnetic intensity in the field ranged from –77.7 to 139.7 nT, revealing both high-magnitude (high-amplitude) and low-magnitude (low-amplitude) magnetic anomalies in the area under consideration. The study area is characterised by high intensities correlated with lithological variation in the basement; the sharp contrast arises from the difference between the magnetic susceptibilities of the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with depths ranging from −500 to 1000 m, and shows that the basement relief is generally gentle and flat within the basement terrain.
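    The computation behind Euler deconvolution is a windowed least squares solve of Euler's homogeneity equation, (x−x0)∂T/∂x + (y−y0)∂T/∂y + (z−z0)∂T/∂z = N(B−T). A bare-bones version is sketched below, assuming the total field T and its three gradients are already available at the window's grid points (gradients are usually obtained by FFT methods) and that a structural index N has been chosen by the interpreter; this mirrors the underlying equation, not the Oasis montaj implementation.

      import numpy as np

      def euler_window(x, y, z, T, dTdx, dTdy, dTdz, N):
          """Solve (r - r0).grad(T) = N*(B - T) for source position r0 and base level B."""
          A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
          b = x * dTdx + y * dTdy + z * dTdz + N * T
          sol, *_ = np.linalg.lstsq(A, b, rcond=None)
          x0, y0, z0, B = sol
          return x0, y0, z0, B

    Sliding this solve over overlapping windows of the grid, and keeping only well-conditioned solutions, yields the familiar cloud of depth estimates.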

  8. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    International Nuclear Information System (INIS)

    Kiktenko, Evgeniy O.

    2017-01-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry into the operations of the parties and on taking into account the results of unsuccessful belief-propagation decodings.

  9. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

    Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.

  10. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    Wohlberg, Brendt (T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory)

    ... salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider ...

  11. Effects of thermal treatment on the MgxZn1−xO films and fabrication of visible-blind and solar-blind ultraviolet photodetectors

    International Nuclear Information System (INIS)

    Tian, Chunguang; Jiang, Dayong; Tan, Zhendong; Duan, Qian; Liu, Rusheng; Sun, Long; Qin, Jieming; Hou, Jianhua; Gao, Shang; Liang, Qingcheng; Zhao, Jianxun

    2014-01-01

    Highlights: • Single-phase wurtzite/cubic MgxZn1−xO films were grown by the RF magnetron sputtering technique. • We focus on the red-shift caused by annealing the MgxZn1−xO films. • MSM-structured visible-blind and solar-blind UV photodetectors were fabricated. - Abstract: A series of single-phase MgxZn1−xO films with different Mg contents were prepared on quartz substrates by the RF magnetron sputtering technique using different MgZnO targets, and annealed in the atmospheric environment. The absorption edges of the MgxZn1−xO films can cover the whole near-ultraviolet and even the whole solar-blind spectral range, and solar-blind wurtzite/cubic MgxZn1−xO films have been realized successfully by the same method. In addition, the absorption edges of the annealed films shift to longer wavelengths, which is caused by the diffusion of Zn atoms gathering at the surface during the thermal treatment process. Finally, truly solar-blind metal-semiconductor-metal structured photodetectors based on wurtzite Mg0.445Zn0.555O and cubic Mg0.728Zn0.272O films were fabricated. The corresponding peak responsivities are 17 mA/W at 275 nm and 0.53 mA/W at 250 nm under a 120 V bias, respectively

  12. Double blind randomised controlled trial of two different breathing techniques in the management of asthma

    Science.gov (United States)

    Slader, C A; Reddel, H K; Spencer, L M; Belousova, E G; Armour, C L; Bosnic‐Anticevich, S Z; Thien, F C K; Jenkins, C R

    2006-01-01

    Background Previous studies have shown that breathing techniques reduce short acting β2 agonist use and improve quality of life (QoL) in asthma. The primary aim of this double blind study was to compare the effects of breathing exercises focusing on shallow nasal breathing with those of non‐specific upper body exercises on asthma symptoms, QoL, other measures of disease control, and inhaled corticosteroid (ICS) dose. This study also assessed the effect of peak flow monitoring on outcomes in patients using breathing techniques. Methods After a 2 week run in period, 57 subjects were randomised to one of two breathing techniques learned from instructional videos. During the following 30 weeks subjects practised their exercises twice daily and as needed for relief of symptoms. After week 16, two successive ICS downtitration steps were attempted. The primary outcome variables were QoL score and daily symptom score at week 12. Results Overall there were no clinically important differences between the groups in primary or secondary outcomes at weeks 12 or 28. The QoL score remained unchanged (0.7 at baseline v 0.5 at week 28, p = 0.11 both groups combined), as did lung function and airway responsiveness. However, across both groups, reliever use decreased by 86% (p>0.10 between groups). Peak flow monitoring did not have a detrimental effect on asthma outcomes. Conclusion Breathing techniques may be useful in the management of patients with mild asthma symptoms who use a reliever frequently, but there is no evidence to favour shallow nasal breathing over non‐specific upper body exercises. PMID:16517572

  13. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  14. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

    We solve numerically a deconvolution problem to extract the undisturbed spectrum from a measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of neutral pions by detecting their decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation. (C) 2010 Elsevier B.V. All rights reserved.
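    Once the folding integral is discretized into a response matrix, the regularized unfolding is a short linear solve. The sketch below shows plain Tikhonov regularization via the normal equations; in practice the weight lam would be selected with the L-curve method (scanning lam and plotting the solution norm against the residual norm), whereas here it is a fixed assumed value.

      import numpy as np

      def tikhonov_unfold(K, measured, lam=1e-2):
          """Solve min ||K s - measured||^2 + lam^2 ||s||^2 for the undisturbed spectrum s."""
          n = K.shape[1]
          lhs = K.T @ K + lam ** 2 * np.eye(n)   # normal equations of the penalized problem
          rhs = K.T @ measured
          return np.linalg.solve(lhs, rhs)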

  15. Milestones on the road to independence for the blind

    Science.gov (United States)

    Reed, Kenneth

    1997-02-01

    Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the fields of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the provision of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into MIDI files to drive a synthesizer? Can computer vision help the blind cross a road, including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken descriptions of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'

  16. Analysis of low-pass filters for approximate deconvolution closure modelling in one-dimensional decaying Burgers turbulence

    Science.gov (United States)

    San, O.

    2016-01-01

    The idea of spatial filtering is central in approximate deconvolution large-eddy simulation (AD-LES) of turbulent flows. The need for low-pass filters arises naturally in the approximate deconvolution approach, which is based solely on mathematical approximations employing repeated filtering operators. Two families of low-pass spatial filters are studied in this paper: the Butterworth filters and the Padé filters. With a selection of various filtering parameters, variants of the AD-LES are systematically applied to the decaying Burgers turbulence problem, which is a standard prototype for more complex turbulent flows. Compared with direct numerical simulations, all forms of the AD-LES approach predict significantly better results than under-resolved simulations at the same grid resolution. However, the results depend strongly on the selection of the filtering procedure and the filter design. It is concluded that complete attenuation of the smallest scales is crucial to prevent energy accumulation at the grid cut-off.
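    The approximate-deconvolution operator at the heart of AD-LES is the truncated van Cittert series: with a filter G, the unfiltered field is approximated as u ≈ Σ_{k=0..N} (I − G)^k ū. The sketch below demonstrates this with a simple periodic 3-point discrete filter standing in for the Butterworth and Padé filters studied in the paper; N and the filter weights are illustrative choices.

      import numpy as np

      def low_pass(u):
          """Second-order 3-point filter with periodic ends (illustrative choice for G)."""
          return 0.25 * (np.roll(u, 1) + 2 * u + np.roll(u, -1))

      def approx_deconvolve(u_bar, N=3):
          """Truncated van Cittert series: sum of (I - G)^k applied to the filtered field."""
          v, residual = np.zeros_like(u_bar), u_bar.copy()
          for _ in range(N + 1):
              v += residual                        # accumulate the (I - G)^k u_bar term
              residual = residual - low_pass(residual)
          return v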

  17. Blind Quantum Signature with Blind Quantum Computation

    Science.gov (United States)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme is designed with a concise structure. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. Inputs to the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information in imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system used for blind quantum computation. It provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  18. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  19. Preparation of the teacher for the blind and visually impaired for teaching the techniques of the use of the white cane

    OpenAIRE

    Lakota, Sara

    2013-01-01

    Due to the loss of vision, an individual needs to acquire specific skills for safe and effective travel, which he can gain through systematic orientation and mobility training and the training of long cane techniques. Knowledge of the field of orientation and mobility and long cane travel goes deep into history, but systematic teaching only emerged after the Second World War, due to the needs of blind war veterans. The safety and quality of life of an individual depend on the outco...

  20. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm for raw spectra, in which the spectrum baseline and the spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline plus a sparse peak list convolved with a known peak shape; the model is fitted under a Gaussian noise assumption. The proposed method is well suited to processing low-resolution spectra with important baseline and unresolved peaks. We developed a new peak deconvolution procedure: the paper describes the derivation of the method, discusses some of its interpretations, and then presents the algorithm in pseudo-code form, with the required optimization procedure detailed. For synthetic data, the method is compared to a more conventional approach and reduces the artifacts caused by the usual two-step procedure (baseline removal, then peak extraction). Finally, results on real linear MALDI-ToF spectra are provided: a collection of spectra of spiked proteins was acquired and analyzed, and better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis. We conclude that this joint approach to peak picking, in which peak deconvolution and baseline computation are performed together, outperforms a classical approach in which baseline and peaks are processed sequentially.
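    A toy version of the joint model can be set up as one non-negative least squares problem: a smooth baseline basis (here a low-order polynomial, standing in for the paper's smooth component) concatenated with one candidate peak of known Gaussian shape per channel. This is only a schematic of the additive model for short spectra; the paper's actual estimator promotes sparsity explicitly and uses a dedicated optimization procedure.

      import numpy as np
      from scipy.optimize import nnls

      def joint_fit(y, peak_width=5.0, poly_order=3):
          """Jointly estimate a smooth baseline and non-negative peak amplitudes."""
          n = y.size
          t = np.linspace(-1.0, 1.0, n)
          B = np.vander(t, poly_order + 1)                # smooth baseline basis
          idx = np.arange(n)
          P = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / peak_width) ** 2)
          # split the baseline coefficients into +/- parts so NNLS can sign them
          # freely, while the peak amplitudes stay non-negative (physical constraint)
          A = np.hstack([B, -B, P])
          coef, _ = nnls(A, y)
          p = poly_order + 1
          baseline = B @ (coef[:p] - coef[p:2 * p])
          peak_amplitudes = coef[2 * p:]                  # one candidate peak per channel
          return baseline, peak_amplitudes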

  1. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We present deconvolution, differentiation and Fourier transform algorithms that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  2. The Tilburg double blind randomised controlled trial comparing inguinal hernia repair according to Lichtenstein and the transinguinal preperitoneal technique

    Directory of Open Access Journals (Sweden)

    Gerritsen Pieter G

    2009-09-01

    Full Text Available Abstract Background Anterior open treatment of the inguinal hernia with a tension-free mesh has reduced the incidence of recurrence and direct postoperative pain. The Lichtenstein procedure nowadays serves as the reference technique for hernia treatment. Not recurrence but chronic pain is the main postoperative complication of inguinal hernia repair by Lichtenstein's technique. Preliminary experience with a soft mesh placed in the preperitoneal space showed good results and less chronic pain. Methods The TULIP is a double-blind randomised controlled trial in which 300 patients will be randomly allocated to anterior inguinal hernia repair according to Lichtenstein or to the transinguinal preperitoneal (TIPP) technique with soft mesh. All patients with a unilateral primary inguinal hernia who are eligible for operation and meet the inclusion criteria will be invited to participate in this trial. The primary endpoint will be direct postoperative and chronic pain. Secondary endpoints are operation time, postoperative complications, hospital stay, costs, return to daily activities (e.g. work) and recurrence. Both groups will be evaluated. Success rate of hernia repair and complications will be measured as a safeguard for quality. The trial is designed to demonstrate that inguinal hernia repair according to the transinguinal preperitoneal (TIPP) technique reduces postoperative pain ... Discussion The TULIP trial is aimed at showing a reduction in postoperative chronic pain after anterior hernia repair according to the transinguinal preperitoneal (TIPP) technique, compared to Lichtenstein. Our hypothesis is that the TIPP technique reduces chronic pain compared to Lichtenstein. Trial registration ISRCTN 93798494

  3. Visualizing the blind brain: brain imaging of visual field defects from early recovery to rehabilitation techniques

    Directory of Open Access Journals (Sweden)

    Marika Urbanski

    2014-09-01

    Full Text Available Visual field defects (VFDs) are one of the most common consequences observed after brain injury, especially after a stroke in the posterior cerebral artery territory. Less frequently, tumours, traumatic brain injury, brain surgery or demyelination can also cause various visual disabilities, from a decrease in visual acuity to cerebral blindness. A VFD is a factor of bad functional prognosis, as it compromises many daily life activities (e.g., obstacle avoidance, driving, and reading) and therefore the patient's quality of life. Spontaneous recovery seems to be limited and restricted to the first six months, with the best chance of improvement at one month. The possible mechanisms at work could be partly cortical reorganization in the visual areas (plasticity) and/or partly the use of intact alternative visual routes, first identified in animal studies and possibly underlying the phenomenon of blindsight. Despite early recovery processes, which are rarely complete, and the learning of compensatory strategies, the patient's autonomy may still be compromised at more chronic stages. Therefore, various rehabilitation therapies based on neuroanatomical knowledge have been developed to improve VFDs. These use eye-movement training techniques (e.g., visual search, saccadic eye movements, reading training), visual field restitution (the Vision Restoration Therapy, VRT), or perceptual learning. In this review, we focus on studies of human adults with acquired VFDs which have used imaging techniques (Positron Emission Tomography: PET; Diffusion Tensor Imaging: DTI; functional Magnetic Resonance Imaging: fMRI; MagnetoEncephalography: MEG) or neurostimulation techniques (Transcranial Magnetic Stimulation: TMS; transcranial Direct Current Stimulation: tDCS) to show brain activations in the course of spontaneous recovery or after specific rehabilitation techniques.

  4. Force and moment reconstruction for a nuclear transportation cask using sum of weighted accelerations and deconvolution theory

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Bateman, V.; Carne, T.G.; Gregory, D.L.; Attaway, S.W.; Bronowski, D.R.

    1989-01-01

    A 9-m drop test of a 1/3-scale-model spent fuel cask onto an unyielding target was conducted, and the structural response of the impact limiters and attachments was evaluated. A mass model of the cask body, with steel-sheathed redwood and balsa impact limiters, was tested in a 10-degree slapdown orientation: one end of the cask impacted the target before the other, with higher deceleration forces resulting from the second impact. The information desired from this test is the deformation of the two impact limiters on either end of the cask as a function of the applied force. This paper summarizes only the applied-force calculations; additional details about the force and moment reconstruction methods, the analysis results, and the test hardware are provided elsewhere. Two new force reconstruction techniques were applied to the slapdown test data: the sum of weighted accelerations technique (SWAT) and deconvolution (DECON). Both use the cask structure as a generalized force transducer and eliminate the elastic vibration response of the cask from the acceleration data; the resulting rigid-body acceleration is then multiplied by the cask mass to obtain an estimate of the applied force. The frequency content of this force is restricted to the cut-off frequency of the digital filter, typically about one-half of the lowest elastic mode of the cask. The new force reconstruction techniques demonstrate the potential for a better estimate of the forces acting on the cask during impact than the conventional method. Their main advantages are the extension of the frequency bandwidth (due to the elimination of the elastic modal response in that bandwidth) and the preservation of the force rise time

  5. Effect of novel inhaler technique reminder labels on the retention of inhaler technique skills in asthma: a single-blind randomized controlled trial.

    Science.gov (United States)

    Basheti, Iman A; Obeidat, Nathir M; Reddel, Helen K

    2017-02-09

    Inhaler technique can be corrected with training, but skills drop off quickly without repeated training. The aim of our study was to explore the effect of novel inhaler technique labels on the retention of correct inhaler technique. In this single-blind randomized parallel-group active-controlled study, clinical pharmacists enrolled asthma patients using controller medication by Accuhaler [Diskus] or Turbuhaler. Inhaler technique was assessed using published checklists (score 0-9). Symptom control was assessed by the asthma control test (ACT). Patients were randomized into active (ACCa; THa) and control (ACCc; THc) groups. All patients received a "Show-and-Tell" inhaler technique counseling service; active patients also received inhaler labels highlighting their initial errors. Baseline data were available for 95 patients, 68% female, mean age 44.9 (SD 15.2) years. Mean inhaler scores were ACCa: 5.3 ± 1.0; THa: 4.7 ± 0.9; ACCc: 5.5 ± 1.1; THc: 4.2 ± 1.0. Asthma was poorly controlled (mean ACT scores ACCa: 13.9 ± 4.3; THa: 12.1 ± 3.9; ACCc: 12.7 ± 3.3; THc: 14.3 ± 3.7). After training, all patients had correct technique (score 9/9). After 3 months, there was significantly less decline in inhaler technique scores in the active than in the control groups (mean difference: Accuhaler -1.04 (95% confidence interval -1.92 to -0.16, P = 0.022); Turbuhaler -1.61 (-2.63 to -0.59, P = 0.003)). Symptom control improved significantly, with no significant difference between active and control patients, but active patients used less reliever medication (active 2.19 (SD 1.78) vs. control 3.42 (1.83) puffs/day, P = 0.002). After inhaler training, novel inhaler technique labels improve the retention of correct inhaler technique skills with dry powder inhalers. Inhaler technique labels represent a simple, scalable intervention that has the potential to extend the benefit of inhaler training on asthma outcomes.

  6. POSTERIOR SEGMENT CAUSES OF BLINDNESS AMONG CHILDREN IN BLIND SCHOOLS

    Directory of Open Access Journals (Sweden)

    Sandhya

    2015-09-01

    Full Text Available BACKGROUND: It is estimated that there are 1.4 million irreversibly blind children in the world, of which 1 million are in Asia alone. India has more blind children than any other country, and nearly 70% of childhood blindness is avoidable. There is a paucity of data available on the causes of childhood blindness. This study focuses on the posterior segment causes of blindness among children attending blind schools in 3 adjacent districts of Andhra Pradesh. MATERIAL & METHODS: This is a cross-sectional study conducted among 204 blind children aged 6-16 years. Detailed eye examination was done by the same investigator to avoid bias. Posterior segment examination was done using a direct and/or indirect ophthalmoscope, after dilating the pupil wherever necessary. The standard WHO/PBL blindness and low vision examination protocol was used to categorize the causes of blindness, and a major anatomical site and underlying cause were selected for each child. The study was carried out from July 2014 to June 2015. The results were analyzed using MS Excel and Epi Info version 7 statistical software. RESULTS: The majority of the children were aged 13-16 years (45.1%) and male (63.7%). A family history of blindness was noted in 26.0% and consanguinity was reported in 29.9% of cases. The majority fulfilled the WHO grade of blindness (73.0%), and in most cases the onset of blindness was from birth (83.7%). The etiology of blindness was unknown in the majority of cases (57.4%), while hereditary causes constituted 25.4% of cases. Posterior segment causes were responsible in 33.3% of cases, with the retina being the most commonly involved anatomical site (19.1%), followed by the optic nerve (14.2%). CONCLUSIONS: There is a need for mandatory ophthalmic evaluation, refraction and assessment of low vision prior to admission into blind schools, with periodic evaluation every 2-3 years

  7. Perception of blindness and blinding eye conditions in rural communities.

    Science.gov (United States)

    Ashaye, Adeyinka; Ajuwon, Ademola Johnson; Adeoti, Caroline

    2006-01-01

    PURPOSE: The purpose of this qualitative study was to explore the causes and management of blindness and blinding eye conditions as perceived by rural dwellers of two Yoruba communities in Oyo State, Nigeria. METHODS: Four focus group discussions were conducted among residents of Iddo and Isale Oyo, two rural Yoruba communities in Oyo State, Nigeria. Participants consisted of sighted persons, those who were partially or totally blind, and community leaders. Ten patent medicine sellers and 12 traditional healers were also interviewed on their perception of the causes and management of blindness in their communities. FINDINGS: Blindness was perceived as an increasing problem in the communities. Multiple factors were perceived to cause blindness, including germs, onchocerciasis and supernatural forces. Traditional healers believed that blindness could be cured, with many claiming that they had cured blindness in the past. However, all agreed that patience was an important requirement for the cure of blindness. The patent medicine sellers' reports were similar to those of the traditional healers. The barriers to the use of orthodox medicine were mainly fear, misconceptions and the perceived high costs of care. There was a consensus of opinion among group discussants and informants that there are severe social and economic consequences of blindness, including not being able to see and assess the quality of what one eats, perpetual sadness, loss of sleep, and dependence on other persons for daily activities. CONCLUSION: Local beliefs associated with the causation, symptoms and management of blindness and blinding eye conditions among the rural Yoruba communities identified provide a bridge for understanding local perspectives and a basis for implementing appropriate primary eye care programs. PMID:16775910

  8. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    International Nuclear Information System (INIS)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Gibb, Andrew G.; Halpern, Mark; Marsden, Gaelen; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; France, Kevin; Gundersen, Joshua O.; Hughes, David H.; Martin, Peter G.; Olmi, Luca

    2011-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report the physical properties of ten compact sources, including six associated protostars, by fitting ...
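    The flux-conserving property comes from the structure of the L-R update itself: with a beam normalized to unit sum, each multiplicative iteration redistributes flux without creating it. A compact (non-blind) form of the iteration is sketched below; the iteration count and the flat starting image are arbitrary choices, and a 2D float map and beam are assumed.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(data, beam, n_iter=50):
          """Iterative L-R restoration of a map observed with a known beam."""
          beam = beam / beam.sum()                 # normalized PSF preserves total flux
          est = np.full_like(data, data.mean())    # flat starting image
          for _ in range(n_iter):
              conv = fftconvolve(est, beam, mode="same")
              est = est * fftconvolve(data / np.maximum(conv, 1e-12),
                                      beam[::-1, ::-1], mode="same")
          return est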

  9. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using a peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq.kg-1 from Ankatso, soil sample spectra with average activities of about (125±2) Bq.kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100±120) Bq.kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries makes it possible to deconvolute many multiplet regions: the quartet within 235-242 keV; Pb-214 and Pb-212 within 294-301 keV; Th-232 daughters within 582-584 keV; Ac-228 within 904-911 keV and within 964-970 keV; and Bi-214 within 1401-1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.

  10. What is Color Blindness?

    Science.gov (United States)


  11. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  12. Road following for blindBike: an assistive bike navigation system for low vision persons

    Science.gov (United States)

    Grewe, Lynne; Overell, William

    2017-05-01

    Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and goals, prominently featuring Road Following, which is the task of directing the user to follow the right side of the road. Unlike what is commonly done for self-driving cars, this work does not depend on lane line markings. 2D computer vision techniques are explored to solve the problem of Road Following. Statistical techniques, including the use of Gaussian Mixture Models, are employed. blindBike is developed as an Android application running on a smartphone device. Other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented along with future avenues of work.

  13. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  14. Turning the tide of corneal blindness

    Directory of Open Access Journals (Sweden)

    Matthew S Oliva

    2012-01-01

    Full Text Available Corneal diseases represent the second leading cause of blindness in most developing world countries. Worldwide, major investments in public health infrastructure and primary eye care services have built a strong foundation for preventing future corneal blindness. However, there are an estimated 4.9 million bilaterally corneal blind persons worldwide who could potentially have their sight restored through corneal transplantation. Traditionally, barriers to increased corneal transplantation have been daunting, with limited tissue availability and lack of trained corneal surgeons making widespread keratoplasty services cost prohibitive and logistically unfeasible. The ascendancy of cataract surgical rates and more robust eye care infrastructure of several Asian and African countries now provide a solid base from which to dramatically expand corneal transplantation rates. India emerges as a clear global priority as it has the world's largest corneal blind population and strong infrastructural readiness to rapidly scale its keratoplasty numbers. Technological modernization of the eye bank infrastructure must follow suit. Two key factors are the development of professional eye bank managers and the establishment of Hospital Cornea Recovery Programs. Recent adaptation of these modern eye banking models in India has led to corresponding high growth rates in the procurement of transplantable tissues, improved utilization rates, operating efficiency realization, and increased financial sustainability. The widespread adaptation of lamellar keratoplasty techniques also holds promise to improve corneal transplant success rates. The global ophthalmic community is now poised to scale up widespread access to corneal transplantation to meet the needs of the millions who are currently blind.

  15. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    Science.gov (United States)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
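
    As a rough illustration of the transmissibility idea, a standard H1-type estimator built from Welch PSD and cross-spectral density estimates can be sketched as follows; the paper's own PSD-based definition for mixed and single-source signals differs in detail, and the function name and parameters here are illustrative.

        import numpy as np
        from scipy.signal import csd, welch

        def transmissibility(x_ref, x_out, fs, nperseg=1024):
            # H1-type estimator T(f) = S_or(f) / S_rr(f): the PSD of the
            # reference response and its cross-spectral density with the output.
            f, s_rr = welch(x_ref, fs=fs, nperseg=nperseg)
            _, s_or = csd(x_out, x_ref, fs=fs, nperseg=nperseg)
            return f, s_or / s_rr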

  16. Effects of thermal treatment on the MgxZn1−xO films and fabrication of visible-blind and solar-blind ultraviolet photodetectors

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Chunguang [School of Materials Science and Engineering, Changchun University of Science and Technology, Changchun 130022 (China); Jiang, Dayong, E-mail: dayongjiangcust@126.com [School of Materials Science and Engineering, Changchun University of Science and Technology, Changchun 130022 (China); Tan, Zhendong [The Metrology Technology Institute of Jilin, Changchun 132013 (China); Duan, Qian; Liu, Rusheng; Sun, Long; Qin, Jieming; Hou, Jianhua; Gao, Shang; Liang, Qingcheng; Zhao, Jianxun [School of Materials Science and Engineering, Changchun University of Science and Technology, Changchun 130022 (China)

    2014-12-15

    Highlights: • Single-phase wurtzite/cubic MgxZn1−xO films were grown by the RF magnetron sputtering technique. • We focus on the red-shift caused by annealing the MgxZn1−xO films. • MSM-structured visible-blind and solar-blind UV photodetectors were fabricated. - Abstract: A series of single-phase MgxZn1−xO films with different Mg contents were prepared on quartz substrates by the RF magnetron sputtering technique using different MgZnO targets, and annealed under the atmospheric environment. The absorption edges of the MgxZn1−xO films can cover the whole near-ultraviolet and even the whole solar-blind spectral range, and solar-blind wurtzite/cubic MgxZn1−xO films have been realized successfully by the same method. In addition, the absorption edges of the annealed films shift to longer wavelengths, which is caused by the diffusion of Zn atoms gathering at the surface during the thermal treatment process. Finally, truly solar-blind metal-semiconductor-metal structured photodetectors based on wurtzite Mg0.445Zn0.555O and cubic Mg0.728Zn0.272O films were fabricated. The corresponding peak responsivities are 17 mA/W at 275 nm and 0.53 mA/W at 250 nm under a 120 V bias, respectively.

  17. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé, François-Xavier; Fadili, Jalal M.; Starck, Jean-Luc

    2012-01-01

    In this paper, we propose a Bayesian MAP estimator for solving the deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type spars...

  18. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, which are two stages commonly required in many BSS methods, to provide BSS performance that is robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve the BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration, and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more techniques proposed in this paper, improved performance of blind separation of nonstationary signals can be achieved.

  19. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    Thermoluminescence Dosimeter (TLD), especially LiF:Mg,Ti material, is one of the most practical personal dosimeters known to date. Dose measurement under 100 uGy using a TLD reader is very difficult at a high precision level. Software analysis is used to improve the precision of the TLD reader. The objective of the research is to compare three TL glow curve analysis methods for doses in the range of 5 up to 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first readout. The second method is a deconvolution method, separating the glow curve into four peaks mathematically; dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is the same deconvolution method, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 method can improve reproducibility six times over manual analysis for a dose of 20 uGy, and can reduce the MMD to 10 uGy, rather than 60 uGy with manual analysis or 20 uGy with the peak 5 area method. In linearity, the sum of peaks 3, 4 and 5 method yields an exactly linear dose response curve over the entire dose range.
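
    A minimal sketch of the curve-deconvolution step, assuming Gaussian peak shapes for simplicity (dedicated glow-curve codes use first-order kinetics expressions instead); peak positions, widths, and all names below are illustrative only.

        import numpy as np
        from scipy.optimize import curve_fit

        def peak(T, area, Tm, w):
            # Gaussian stand-in for one glow peak (area-normalized shape).
            return area * np.exp(-0.5 * ((T - Tm) / w) ** 2) / (w * np.sqrt(2 * np.pi))

        def glow_curve(T, *p):
            # Sum of four peaks (2-5); p = (area, Tm, w) per peak, flattened.
            return sum(peak(T, *p[i:i + 3]) for i in range(0, len(p), 3))

        # Illustrative initial guesses (area, peak temperature in deg C, width):
        p0 = [1e3, 140, 12, 1e3, 180, 12, 5e3, 210, 12, 2e4, 230, 12]
        # popt, _ = curve_fit(glow_curve, T_meas, counts, p0=p0)
        # dose_signal = popt[3] + popt[6] + popt[9]   # areas of peaks 3, 4 and 5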

  20. Prospective, Double-Blind Evaluation of Umbilicoplasty Techniques Using Conventional and Crowdsourcing Methods.

    Science.gov (United States)

    van Veldhuisen, Charlotte L; Kamali, Parisa; Wu, Winona; Becherer, Babette E; Sinno, Hani H; Ashraf, Azra A; Ibrahim, Ahmed M S; Tobias, Adam; Lee, Bernard T; Lin, Samuel J

    2017-12-01

    Umbilical reconstruction is an important component of deep inferior epigastric perforator (DIEP) flap breast reconstruction. This study evaluated the aesthetics of three different umbilical reconstruction techniques during DIEP flap breast reconstruction. From January to April of 2013, a total of 29 consecutive patients undergoing DIEP flap breast reconstruction were randomized intraoperatively to receive one of three umbilicoplasty types: a diamond, an oval, or an inverted V incision. Independent plastic surgeons and members of the general public, identified using an online "crowdsourcing" platform, evaluated aesthetic outcomes in a blinded fashion. Reviewers were shown postoperative photographs of the umbilicus of all patients and a four-point Likert scale was used to rate the new umbilicus on the size, scar formation, shape, localization, and overall appearance. Results for the focus group of independent plastic surgeons and 377 members of the public were retrieved (n = 391). A total of 10 patients (34.5 percent) were randomized into having the diamond incision, 10 (34.5 percent) had the oval incision, and nine (31.0 percent) had the inverted V incision. Patients were well matched in terms of overall characteristics. The general public demonstrated a significant preference for the oval incision in all five parameters. There was no preference identified among surgeons. This study provides evidence that a sample of the U.S. general public prefers the aesthetics of the oval umbilicoplasty incision, which contrasted with the lack of preference identified within this focus group of plastic surgeons. Therapeutic, II.

  1. Increasing the darkfield contrast-to-noise ratio using a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging.

    Science.gov (United States)

    Weber, Thomas; Pelzer, Georg; Bayer, Florian; Horn, Florian; Rieger, Jens; Ritter, André; Zang, Andrea; Durst, Jürgen; Anton, Gisela; Michel, Thilo

    2013-07-29

    A novel information retrieval algorithm for X-ray grating-based phase-contrast imaging, based on the deconvolution of the object and the reference phase stepping curve (PSC) as proposed by Modregger et al., was investigated in this paper. We applied the method for the first time on data obtained with a polychromatic spectrum and compared the results to those received by applying the commonly used method based on a Fourier analysis. We confirmed the expectation that both methods deliver the same results for the absorption and the differential phase image. For the darkfield image, a mean contrast-to-noise ratio (CNR) increase by a factor of 1.17 using the new method was found. Furthermore, the dose saving potential of the deconvolution method was estimated experimentally. It is found that for the conventional method a dose higher by a factor of 1.66 is needed to obtain a CNR value similar to that of the novel method. A further analysis of the data revealed that the improvement in CNR and dose efficiency is due to the superior background noise properties of the deconvolution method, but at the cost of comparability between measurements at different applied dose values, as the mean value becomes dependent on the photon statistics used.
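
    For context, the commonly used Fourier analysis that the deconvolution method is compared against can be sketched in a few lines; the array layout and function name are our assumptions.

        import numpy as np

        def psc_fourier(psc_obj, psc_ref):
            # Harmonic analysis of phase stepping curves; the last axis holds
            # the phase steps over one grating period, per pixel.
            Fo = np.fft.rfft(psc_obj, axis=-1)
            Fr = np.fft.rfft(psc_ref, axis=-1)
            transmission = Fo[..., 0].real / Fr[..., 0].real
            dphi = np.angle(Fo[..., 1]) - np.angle(Fr[..., 1])  # differential phase
            vis_obj = np.abs(Fo[..., 1]) / Fo[..., 0].real      # object visibility
            vis_ref = np.abs(Fr[..., 1]) / Fr[..., 0].real      # reference visibility
            darkfield = vis_obj / vis_ref
            return transmission, dphi, darkfield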

  2. Causes of blindness in blind unit of the school for the handicapped ...

    African Journals Online (AJOL)

    1. To describe the causes of blindness in pupils and staff in the blind unit of the School for the Handicapped in Kwara State. 2. To identify problems in the blind school and initiate intervention. All the blind or visually challenged people in the blind unit of the school for the handicapped were interviewed and examined using a ...

  3. Blindness following bleb-related infection in open angle glaucoma.

    Science.gov (United States)

    Yamada, Hiroki; Sawada, Akira; Kuwayama, Yasuaki; Yamamoto, Tetsuya

    2014-11-01

    To estimate the risk of blindness following bleb-related infection after trabeculectomy with mitomycin C in open angle glaucoma, utilizing data obtained from two prospective multicenter studies. The incidence of bleb-related infection in open angle glaucoma after the first or second glaucoma surgery was calculated using a Kaplan-Meier analysis and data from the Collaborative Bleb-related Infection Incidence and Treatment Study (CBIITS). The rate of blindness following bleb-related infection was calculated using data from the Japan Glaucoma Society Survey of Bleb-related Infection (JGSSBI). Finally, the rate of blindness following bleb-related infection after filtering surgery was estimated based on the above two data sets. Blindness was defined as an eye with a visual acuity of 0.04 or less. The incidences of development of bleb-related infection at 5 years were 2.6 ± 0.7 % (calculated cumulative incidence ± standard error) for all infections and 0.9 ± 0.4 % for endophthalmitis in all cases in the CBIITS data. The rates of blindness in the JGSSBI data were 14 % for the total cases with bleb-related infection and 30 % for the endophthalmitis subgroup. The rate of blindness developing within 5 years following trabeculectomy was estimated to be approximately 0.24-0.36 %. The rate of blindness following bleb-related infection within 5 years after trabeculectomy is considerable and thus careful consideration must be given to the indication for trabeculectomy and the selection of surgical techniques.

  4. Using Adaptive Tools and Techniques to Teach a Class of Students Who Are Blind or Low-Vision

    Science.gov (United States)

    Supalo, Cary A.; Mallouk, Thomas E.; Amorosi, Christeallia; Lanouette, James; Wohlers, H. David; McEnnis, Kathleen

    2009-01-01

    A brief overview of the 2007 National Federation of the Blind-Jernigan Institute Youth Slam Chemistry Track, a course of study within a science camp that provided firsthand experimental experience to 200 students who are blind and low-vision, is given. For many of these students, this was their first hands-on experience with laboratory chemistry.…

  5. New Physical Constraints for Multi-Frame Blind Deconvolution

    Science.gov (United States)

    2014-12-10


  6. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  7. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
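
    The repeated-filtering idea behind ADM can be illustrated with a one-dimensional van Cittert-type series; the box filter, series order, and names below are illustrative choices, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        def G(u, width=5):
            # Top-hat (box) filter standing in for the implicit grid filter.
            return uniform_filter1d(u, size=width, mode="wrap")

        def adm_reconstruct(u_filtered, order=3, width=5):
            # van Cittert series u* = sum_{k=0..N} (I - G)^k applied to the
            # filtered field: an approximation of the unfiltered solution.
            u_star = np.zeros_like(u_filtered)
            v = u_filtered.copy()
            for _ in range(order + 1):
                u_star += v
                v = v - G(v, width)
            return u_star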

  8. Euro Banknote Recognition System for Blind People.

    Science.gov (United States)

    Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael

    2017-01-20

    This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter) dotted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on the modified Viola and Jones algorithms, while the banknote value recognition relies on the Speed Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.

  9. Blind CP-OFDM and ZP-OFDM Parameter Estimation in Frequency Selective Channels

    Directory of Open Access Journals (Sweden)

    Vincent Le Nir

    2009-01-01

    Full Text Available A cognitive radio system needs accurate knowledge of the radio spectrum it operates in. Blind modulation recognition techniques have been proposed to discriminate between single-carrier and multicarrier modulations and to estimate their parameters. Some powerful techniques use autocorrelation- and cyclic autocorrelation-based features of the transmitted signal, applying to OFDM signals using a Cyclic Prefix time guard interval (CP-OFDM). In this paper, we propose a blind parameter estimation technique based on a power autocorrelation feature, applying to OFDM signals using a Zero Padding time guard interval (ZP-OFDM), which in particular excludes the use of the autocorrelation- and cyclic autocorrelation-based techniques. The proposed technique leads to an efficient estimation of the symbol duration and zero padding duration in frequency selective channels, and is insensitive to receiver phase and frequency offsets. Simulation results are given for WiMAX and WiMedia signals using realistic Stanford University Interim (SUI) and Ultra-Wideband (UWB) IEEE 802.15.4a channel models, respectively.
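
    For comparison with the CP-OFDM case mentioned above, a basic autocorrelation-based estimate of the useful symbol length can be sketched as follows (the paper's ZP-OFDM technique instead relies on a power autocorrelation feature); names are illustrative.

        import numpy as np

        def estimate_fft_size(x, max_lag):
            # The cyclic prefix repeats the symbol tail, so the magnitude of
            # the autocorrelation peaks at a lag equal to the FFT length.
            r = [np.abs(np.vdot(x[lag:], x[:-lag])) for lag in range(1, max_lag)]
            return 1 + int(np.argmax(r))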

  10. Navigation Problems in Blind-to-Blind Pedestrians Tele-assistance Navigation

    OpenAIRE

    Balata, Jan; Mikovec, Zdenek; Maly, Ivo

    2015-01-01

    We raise the question of whether it is possible to build a large-scale navigation system for blind pedestrians where a blind person navigates another blind person remotely by mobile phone. We have conducted an experiment, in which we observed blind people navigating each other in a city center in 19 sessions. We focused on problems in the navigator’s attempts to direct the traveler to the destination. We observed 96 problems in total, classified them on the basis of the typ...

  11. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    Science.gov (United States)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
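
    Time-domain ANC of the kind described is typically realized with an LMS adaptive filter; the following sketch is a generic LMS canceller, not the authors' processing chain, and the tap count and step size are illustrative.

        import numpy as np

        def lms_anc(primary, reference, n_taps=32, mu=1e-3):
            # primary: array mic signal = source + coherent noise;
            # reference: noise-only sensor. Returns the cleaned signal.
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]   # most recent sample first
                y = w @ x                           # noise estimate at the mic
                e = primary[n] - y                  # error = cleaned sample
                w += 2 * mu * e * x                 # LMS weight update
                out[n] = e
            return out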

  12. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717
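
    A toy version of PCA-based two-way admixture estimation, assuming two parental reference panels and hybrids genotyped on the same SNPs; this linear projection between cluster means is a simplification of the authors' estimator, and all names are illustrative.

        import numpy as np
        from sklearn.decomposition import PCA

        def admixture_fraction(geno_a, geno_b, geno_hybrids):
            # geno_*: (samples x SNPs) allele-count matrices {0, 1, 2}.
            # Fit PC1 on the two parental panels and place each hybrid
            # linearly between the parental cluster means.
            pca = PCA(n_components=1).fit(np.vstack([geno_a, geno_b]))
            mean_a = pca.transform(geno_a).mean()
            mean_b = pca.transform(geno_b).mean()
            h = pca.transform(geno_hybrids)[:, 0]
            return np.clip((h - mean_b) / (mean_a - mean_b), 0.0, 1.0)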

  13. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    Full Text Available The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.

  14. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  15. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  16. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min, a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during the annealing.

  17. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, prospectively collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point scale of preference. The significance of the differences in readers' preference was tested with a Wilcoxon's signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.05). The visibility of chest anatomical structures with the deconvolution algorithm of PSF applied was superior to that of the original chest radiography.

  18. Countermeasures Against Blinding Attack on Superconducting Nanowire Detectors for QKD

    Directory of Open Access Journals (Sweden)

    Elezov M.S.

    2015-01-01

    Full Text Available Nowadays, superconducting single-photon detectors (SSPDs) are used in Quantum Key Distribution (QKD) instead of single-photon avalanche photodiodes. Recently, bright-light control of the SSPD has been demonstrated. This attack employed a “backdoor” in the detector biasing technique. We developed an autoreset system which returns the SSPD to the superconducting state when it is latched. We investigate the latched state of the SSPD and define the limiting conditions for an effective blinding attack. A peculiarity of the blinding attack is a long non-single-photon response of the SSPD, much longer than the usual single-photon response. In addition, the response duration of the SSPD must be followed up. These countermeasures allow us to prevent blinding attacks on SSPDs for Quantum Key Distribution.

  19. Emerging optical nanoscopy techniques

    Directory of Open Access Journals (Sweden)

    Montgomery PC

    2015-09-01

    Full Text Available Paul C Montgomery, Audrey Leong-Hoi Laboratoire des Sciences de l'Ingénieur, de l'Informatique et de l'Imagerie (ICube), Unistra-CNRS, Strasbourg, France Abstract: To face the challenges of modern health care, new imaging techniques with subcellular resolution or detection over wide fields are required. Far field optical nanoscopy presents many new solutions, providing high resolution or detection at high speed. We present a new classification scheme to help appreciate the growing number of optical nanoscopy techniques. We underline an important distinction between superresolution techniques that provide improved resolving power and nanodetection techniques for characterizing unresolved nanostructures. Some of the emerging techniques within these two categories are highlighted with applications in biophysics and medicine. Recent techniques employing wider angle imaging by digital holography and scattering lens microscopy allow superresolution to be achieved for subcellular, and even in vivo, imaging without labeling. Nanodetection techniques are divided into four subcategories using contrast, phase, deconvolution, and nanomarkers. Contrast enhancement is illustrated by means of a polarized light-based technique and with strobed phase-contrast microscopy to reveal nanostructures. Very high sensitivity phase measurement using interference microscopy is shown to provide nanometric surface roughness measurement or to reveal internal nanometric structures. Finally, the use of nanomarkers is illustrated with stochastic fluorescence microscopy for mapping intracellular structures. We also present some of the future perspectives of optical nanoscopy. Keywords: microscopy, imaging, superresolution, nanodetection, biophysics, medical imaging

  20. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

    Woelffel, William; Claireaux, Corinne; Toplis, Michael J.; Burov, Ekaterina; Barthel, Etienne; Shukla, Abhay; Biscaras, Johan; Chopinet, Marie-Hélène; Gouillart, Emmanuelle

    2015-01-01

    Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi...
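
    Non-negative matrix factor deconvolution of a stack of spectra can be sketched with scikit-learn's NMF; the component count and initialization below are illustrative, not the parameters used in the paper.

        import numpy as np
        from sklearn.decomposition import NMF

        def nmf_deconvolve(spectra, n_components=4):
            # spectra: (n_spectra x n_wavenumbers) matrix of baseline-corrected,
            # non-negative intensities; factorizes spectra ~ weights @ components
            # with both factors constrained to be >= 0.
            model = NMF(n_components=n_components, init="nndsvda", max_iter=2000)
            weights = model.fit_transform(spectra)   # per-spectrum abundances
            components = model.components_           # end-member partial spectra
            return weights, components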

  1. Evaluation of pre-jamming indication parameter during blind backfilling technique

    Directory of Open Access Journals (Sweden)

    Susmita Panda

    2016-01-01

    Full Text Available Hydraulic blind backfilling is used to reduce subsidence problems above old underground water-logged coal mines. This paper describes experimental research on a fully transparent model of a straight underground mine gallery. An automatic data acquisition system was installed in the model to continuously record the sand and water flowrates along with the inlet pressure of the slurry near the model's inlet. Pressure signature graphs and pressure loss curves with bed advancement under different flow conditions are examined. Pressure signature analyses for various flowrates and sand slurry concentrations are conducted to evaluate a pre-jamming indication parameter, which could be used to indicate the arrival of the final stage of filling.

  2. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    Full Text Available The auditory steady-state response (ASSR) is one of the main approaches in clinic for health screening and frequency-specific hearing assessment. However, its generation mechanism is still of much controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence while MSAD uses the same ISIs but evenly-spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology and distinct frequency characteristics in synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP suggests the best similarity. Such a similarity is also demonstrated at the individual level, where only MSAD shows no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both the stimulation rate and the sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of ASSR but
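
    The CLAD step can be illustrated as a circular deconvolution in the Fourier domain, assuming the averaged sweep is the circular convolution of the transient AEP with the stimulus impulse train; the crude additive regularization and all names are our own simplifications.

        import numpy as np

        def clad_deconvolve(v_avg, stim_times, fs, eps=1e-3):
            # v_avg: one averaged sweep; stim_times: stimulus onsets (s) within
            # the sweep. The sweep is modeled as the circular convolution of
            # the transient AEP with the stimulus impulse train.
            s = np.zeros(len(v_avg))
            s[(np.asarray(stim_times) * fs).astype(int)] = 1.0
            S = np.fft.rfft(s)
            A = np.fft.rfft(v_avg) / (S + eps)   # eps crudely regularizes small bins
            return np.fft.irfft(A, n=len(v_avg))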

  3. A Handbook for Parents of Deaf-Blind Children.

    Science.gov (United States)

    Esche, Jeanne; Griffin, Carol

    The handbook for parents of deaf blind children describes practical techniques of child care for such activities as sitting, standing, walking, sleeping, washing, eating, dressing, toilet training, disciplining, and playing. For instance, it is explained that some visually handicapped children acquire mannerisms in their early years because they…

  4. Double skin façade: Modelling technique and influence of venetian blinds on the airflow and heat transfer

    International Nuclear Information System (INIS)

    Iyi, Draco; Hasan, Reaz; Penlington, Roger; Underwood, Chris

    2014-01-01

    The demand to reduce building cooling load and annual energy consumption can be optimised with the use of Double Skin Facade (DSF). Computational Fluid Dynamics (CFD) methods are frequently used for the analysis of heat transfer through DSF. However, considerable uncertainty exists regarding few key parameters, such as modelling strategies and the solar heat transmitted to the indoor space as a function of the blind tilt angles and positioning within the façade channel. In this paper we have investigated four modelling strategies and the influence of blind tilt angle and their proximity to the façade walls. The DSF system used in this investigation is equipped with venetian blinds and facades that absorb and reflect the incident solar radiation and transfer the direct solar heat gain into the building. A finite volume discretization method with the SIMPLE solution algorithm of the velocity-pressure coupling involving the low-turbulence k–ε model is used. A ray-traced solar model is coupled with long wave radiation model to solve the complete solar and radiation fields along with convection and conduction fields. On the modelling strategies, three dimensional domains were cast over three computational zones; external zone with solar radiation entering the outer skin of glass; buoyancy-driven air cavity zone with convection and transmitted solar radiation; and an internal zone. Also investigated is the thermal behaviour of the DSF due to the blind tilt angles (30°, 45°, 60°, and 75°) and its position from the facade walls (104 mm, 195 mm, 287 mm and 379 mm). Validations of the results are based on experimental data from the literature and the predicted trends compared very well with the experimental measurements. The heat gain due to direct solar radiation and convection through the facades to the internal space are presented. Comparative analysis of the four modelling strategies shows little variation of the results. The implication is a reduction in

  5. Euro Banknote Recognition System for Blind People

    Directory of Open Access Journals (Sweden)

    Larisa Dunai Dunai

    2017-01-01

    Full Text Available This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter dotted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on the modified Viola and Jones algorithms, while the banknote value recognition relies on the Speed Up Robust Features (SURF technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively.

  6. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of the CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the Non-Negative Least Squares (NNLS) regularized method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm with significantly less computer processing time.
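
    A bare-bones non-negative least-squares deconvolution of a 1D spectrum (without the regularization term used in the paper) can be set up from the measured resolution function as follows; names are illustrative.

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.optimize import nnls

        def nnls_deconvolve(spectrum, resolution_fn):
            # Build the (causal) convolution matrix from the measured
            # instrumental resolution function and solve
            # min ||A x - y||_2 subject to x >= 0.
            n = len(spectrum)
            col = np.zeros(n)
            col[:len(resolution_fn)] = resolution_fn
            A = toeplitz(col, np.zeros(n))
            x, _ = nnls(A, spectrum)
            return x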

  7. Deconvolution analysis of 99mTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of 99mTc-methylene diphosphonate (MDP) and 47Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of 99mTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The 99mTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between 99mTc-MDP bone accumulation rates and the results of 47Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). As a result, deconvolution analysis of regional 99mTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of 99mTc-MDP kinetics and 47Ca kinetics might suggest a preferential binding of 99mTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  8. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  9. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    International Nuclear Information System (INIS)

    Greenberg, M.; Ebel, D.S.

    2009-01-01

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ∼15 µm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis on track No.82 is presented here as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images, by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.

  10. Childhood blindness at a school for the blind in Riyadh, Saudi Arabia.

    Science.gov (United States)

    Kotb, Amgad A; Hammouda, Ehab F; Tabbara, Khalid F

    2006-02-01

    To determine the major causes of eye diseases leading to visual loss and blindness among children attending a school for the blind in Riyadh, Saudi Arabia. A total of 217 school children with visual disabilities attending a school for the blind in Riyadh were included. All children were brought to The Eye Center, Riyadh, and had complete ophthalmologic examinations including visual acuity testing, biomicroscopy, ophthalmoscopy, tonometry and laboratory investigations. In addition, some patients were subjected to electroretinography (ERG), electrooculography (EOG), measurement of visual evoked potentials (VEP), and laboratory work-up for congenital disorders. There were 117 male students with an age range of 6-19 years and a mean age of 16 years. In addition, there were 100 females with an age range of 6-18 years and a mean age of 12 years. Of the 217 children, 194 (89%) were blind from genetically determined diseases or congenital disorders and 23 (11%) were blind from acquired diseases. The major causes of bilateral blindness in children were retinal degeneration, congenital glaucoma, and optic atrophy. The most common acquired causes of childhood blindness were infections and trauma. The etiological pattern of childhood blindness in Saudi Arabia has changed from microbial keratitis to genetically determined diseases of the retina and optic nerve. Currently, the most common causes of childhood blindness are genetically determined causes. Consanguineous marriages may account for the autosomal recessive disorders. Public education programs should include information for the prevention of trauma and genetic counseling. Eye examinations for preschool and school children are mandatory for the prevention and cure of blinding disorders.

  11. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...
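
    One standard way to realize such a constrained iterative deconvolution is a projected Landweber scheme, sketched below under the assumption of a known impulse response; this is not the authors' algorithm, and the step size and names are illustrative.

        import numpy as np
        from scipy.signal import fftconvolve

        def projected_landweber(y, h, n_iter=200, tau=None, support=None):
            # Solve y = h * x iteratively; project onto positivity and an
            # optional finite-duration support mask after each gradient step.
            if tau is None:
                tau = 1.0 / (np.abs(np.fft.fft(h, len(y))).max() ** 2)
            x = np.zeros_like(y, dtype=float)
            h_rev = h[::-1]
            for _ in range(n_iter):
                r = y - fftconvolve(x, h, mode="same")            # data residual
                x = x + tau * fftconvolve(r, h_rev, mode="same")  # gradient step
                x = np.maximum(x, 0.0)                            # positivity
                if support is not None:
                    x[~support] = 0.0                             # finite duration
            return x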

  12. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and of the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach, which does not impose any simplifying assumptions about the rupture process, thus being well adapted to large earthquakes. We first determine the source duration based on the length of the high frequency (1-3 Hz) signal content. The deconvolution of synthetic double-couple point source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real data body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, the integration of the STFs gives directly the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes in the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and the SCARDEC method may reach 0.2, but values are found consistent if we take into account that the Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
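
    The deconvolution kernel of such body-wave methods is often a water-level spectral division, sketched below; SCARDEC adds causality/positivity constraints on the STF and a search over strike, dip, rake and depth, which this fragment does not include. Names and the water-level parameter are illustrative.

        import numpy as np

        def water_level_deconvolve(data, synth, level=0.01):
            # Spectral division of the observed body wave by a synthetic
            # point-source seismogram; the water level keeps small spectral
            # amplitudes of the synthetic from blowing up the quotient.
            n = len(data)
            D = np.fft.rfft(data, n)
            S = np.fft.rfft(synth, n)
            floor = level * np.abs(S).max()
            S_reg = np.where(np.abs(S) < floor,
                             floor * np.exp(1j * np.angle(S)), S)
            return np.fft.irfft(D / S_reg, n)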

  13. Building a future full of opportunity for blind youths in science

    Science.gov (United States)

    Beck-Winchatz, B.; Riccobono, M. A.

    Like their sighted peers, many blind students in elementary, middle and high school are naturally interested in space. This interest can motivate them to learn fundamental scientific, quantitative and critical thinking skills, and sometimes even leads to careers in SMET disciplines. However, these students are often at a disadvantage in science because of the ubiquity of important graphical information that is generally not available in accessible formats, the unfamiliarity of teachers with non-visual teaching methods, lack of access to blind role models, and the low expectations of their teachers and parents. In this presentation we will describe joint efforts by NASA and the National Federation of the Blind's (NFB) Center for Blind Youth in Science to develop and implement strategies to promote opportunities for blind youth in science. These include the development of tactile space science books and curriculum materials, science academies for blind middle school and high school students, internship and mentoring programs, as well as research on non-visual learning techniques. This partnership with the NFB exemplifies the effectiveness of collaborations between NASA and consumer-directed organizations to improve opportunities for underserved and underrepresented individuals. Session participants will also have the opportunity to examine some of the recently developed tactile space science education materials themselves.

  14. Rapid Assessment of Avoidable Blindness in Western Rwanda: Blindness in a Postconflict Setting

    OpenAIRE

    Mathenge, Wanjiku; Nkurikiye, John; Limburg, Hans; Kuper, Hannah

    2007-01-01

    Editors' Summary Background. VISION 2020, a global initiative that aims to eliminate avoidable blindness, has estimated that 75% of blindness worldwide is treatable or preventable. The WHO estimates that in Africa, around 9% of adults aged over 50 are blind. Some data suggest that people living in regions affected by violent conflict are more likely to be blind than those living in unaffected regions. Currently no data exist on the likely prevalence of blindness in Rwanda, a central African c...

  15. Global data on blindness.

    Science.gov (United States)

    Thylefors, B.; Négrel, A. D.; Pararajasegaram, R.; Dadzie, K. Y.

    1995-01-01

    Globally, it is estimated that there are 38 million persons who are blind. Moreover, a further 110 million people have low vision and are at great risk of becoming blind. The main causes of blindness and low vision are cataract, trachoma, glaucoma, onchocerciasis, and xerophthalmia; however, insufficient data on blindness from causes such as diabetic retinopathy and age-related macular degeneration preclude specific estimations of their global prevalence. The age-specific prevalences of the major causes of blindness that are related to age indicate that the trend will be for an increase in such blindness over the decades to come, unless energetic efforts are made to tackle these problems. More data collected through standardized methodologies, using internationally accepted (ICD-10) definitions, are needed. Data on the incidence of blindness due to common causes would be useful for calculating future trends more precisely. PMID:7704921

  16. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  17. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

    Mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach may underestimate the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves obtained from patients with different renal clearance levels with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the true MTT when the MTT was higher than 6 min. The longer the MTT, the greater the underestimation. Up to an MTT value of 6 min, the error in the MTT estimate is negligible. As normal cortical transit is less than 2 min, this approach can be used in patients to calculate the pixel-by-pixel cortical mean transit time and to create an MTT parametric image without deconvolution.
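
    The ratio-of-integrals estimator described above is straightforward to express in code. A hedged sketch (function and array names are illustrative) computing the finite-window MTT from sampled renal and plasma curves:

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    def mean_transit_time(t, renal, plasma):
        """Finite-window MTT estimate: integral of renal activity divided
        by integral of plasma activity over the acquisition window.
        Per the study above, this underestimates the true MTT when the
        window (e.g. 20 min) is short relative to the MTT (> ~6 min)."""
        return trapezoid(renal, t) / trapezoid(plasma, t)
    ```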

  18. The deconvolution of Doppler-broadened positron annihilation measurements using fast Fourier transforms and power spectral analysis

    International Nuclear Information System (INIS)

    Schaffer, J.P.; Shaughnessy, E.J.; Jones, P.L.

    1984-01-01

    A deconvolution procedure which corrects Doppler-broadened positron annihilation spectra for instrument resolution is described. The method employs fast Fourier transforms, is model independent, and does not require iteration. The mathematical difficulties associated with the ill-posed Fredholm integral equation of the first kind are overcome by using power spectral analysis to select a limited number of low-frequency Fourier coefficients. The FFT/power spectrum method is then demonstrated for an irradiated high-purity single-crystal sapphire sample. (orig.)
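
    A minimal sketch of the idea, assuming the resolution function has been measured on the same channel grid and that the number of retained coefficients has already been chosen from the power spectrum (all names are illustrative):

    ```python
    import numpy as np

    def fft_deconvolve(measured, resolution, n_keep):
        """Single-pass Fourier deconvolution of the instrument resolution
        function: divide the spectra and keep only the n_keep lowest-
        frequency coefficients, since the higher ones are dominated by
        amplified noise."""
        n = len(measured)
        M = np.fft.rfft(measured)
        R = np.fft.rfft(resolution)
        D = np.zeros_like(M)
        safe = np.abs(R) > 1e-12          # guard against division by ~0
        D[safe] = M[safe] / R[safe]
        D[n_keep:] = 0.0                  # truncate noise-dominated coefficients
        return np.fft.irfft(D, n)
    ```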

  19. PHYSIOTHERAPY OF BLIND AND LOW VISION INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    Alenka Tatjana Sterle

    2002-12-01

    Full Text Available Background. The authors present a preventive physiotherapy programme intended to improve the well-being of persons who have been blind or visually impaired since birth or who experience partial or complete loss of vision later in life as a result of injury or disease. Methods. Different methods and techniques of physiotherapy, kinesitherapy and relaxation used in the rehabilitation of visually impaired persons are described. Results. The goals of timely physical treatment are to avoid unnecessary problems, such as improper posture, tension of the entire body, face and eyes, and deterioration of facial expression, that often accompany partial or complete loss of vision. Regular training improves functional skills, restores skills that have been lost, and prevents the development of defects and consequent disorders of the locomotor apparatus. Conclusions. It is very difficult to change the lifestyle and habits of blind and visually impaired persons. Especially elderly people who experience complete or partial loss of vision later in their lives are often left to their fate. Therefore blind and visually impaired persons of all age groups should be enrolled in a suitable rehabilitation programme that will improve the quality of their life.

  20. Comparison of two modalities: a novel technique, 'chromohysteroscopy', and blind endometrial sampling for the evaluation of abnormal uterine bleeding.

    Science.gov (United States)

    Alay, Asli; Usta, Taner A; Ozay, Pinar; Karadugan, Ozgur; Ates, Ugur

    2014-05-01

    The objective of this study was to compare classical blind endometrial tissue sampling with hysteroscopic biopsy sampling following methylene blue dyeing in premenopausal and postmenopausal patients with abnormal uterine bleeding. A prospective case-control study was carried out in the Office Hysteroscopy Unit. Fifty-four patients with complaints of abnormal uterine bleeding were evaluated. Data from 38 patients were included in the statistical analysis. Three groups were compared by examining samples obtained through hysteroscopic biopsy before and after methylene blue dyeing, and through classical blind endometrial tissue sampling. First, the uterine cavity was evaluated with office hysteroscopy. Methylene blue dye was administered through the hysteroscopic inlet. Tissue samples were obtained from stained and non-stained areas. Blind endometrial sampling was performed in the same patients immediately after the hysteroscopy procedure. The results of hysteroscopic biopsy from methylene blue-stained and non-stained areas and of blind biopsy were compared. No statistically significant differences were found among biopsy samples obtained from methylene blue-stained areas, non-stained areas and blind biopsy (P > 0.05). We suggest that chromohysteroscopy is not superior to blind endometrial sampling in cases of abnormal uterine bleeding. Further studies with greater sample sizes should be performed to assess the validity of routine use of endometrial dyeing. © 2014 The Authors. Journal of Obstetrics and Gynaecology Research © 2014 Japan Society of Obstetrics and Gynecology.

  1. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of the mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and the wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrieval, especially of feldspar proportions (e.g. K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while denser minerals and glasses appear to be more abundant along the margins of the active dune fields.
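
    Linear deconvolution of this kind is commonly posed as a non-negative least squares unmixing problem. The sketch below shows only the core step and is illustrative; a real TIR workflow would also handle the blackbody/continuum term, which is omitted here:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_emissivity(E, e_obs):
        """Linear deconvolution of a measured emissivity spectrum e_obs
        (n_bands,) into non-negative proportions of library endmembers
        (columns of E, shape (n_bands, n_endmembers)), normalized to 1."""
        frac, rnorm = nnls(E, e_obs)       # least squares with frac >= 0
        return frac / frac.sum(), rnorm    # rnorm: residual misfit norm
    ```

    Convolving the library spectra to the sensor's band passes before unmixing is what makes the hyperspectral-versus-multispectral comparison in the abstract possible.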

  3. Self-noise suppression schemes in blind image steganography

    Science.gov (United States)

    Ramkumar, Mahalingam; Akansu, Ali N.

    1999-11-01

    Blind, or oblivious, data hiding can be considered a signaling method where the origin of the signal constellation is not known. The origin, however, can be estimated by means of self-noise suppression techniques. In this paper, we propose such a technique and present both theoretical and numerical evaluations of its performance in an additive noise scenario. The problem of the optimal choice of the parameters of the proposed technique is also explored, and solutions are presented. Though the cover object is assumed to be an image for purposes of illustration, the proposed method is equally applicable to other types of multimedia data, such as video, speech or music.

  4. Mid-Face Volumization With Hyaluronic Acid: Injection Technique and Safety Aspects from a Controlled, Randomized, Double-Blind Clinical Study.

    Science.gov (United States)

    Prager, Welf; Agsten, Karla; Kravtsov, Maria; Kerscher, Prof Martina

    2017-04-01

    BACKGROUND: Injection of hyaluronic acid (HA) volumizing fillers in the malar area is intended for rejuvenation of the mid-face. The choice of products, depth, and technique of injection depends on the desired level of volume enhancement and practitioners' preferences. OBJECTIVE: To describe a volumizing injection technique in the scope of a controlled, randomized, double-blind, single-center, split-face clinical study. A total of 45 subjects with bilateral symmetrical moderate to severe volume loss in the malar area received a single 2 mL injection of CPM®-26 (Cohesive Polydensified Matrix®) on one side and VYC®-20 (VYCROSS®) on the contralateral side of the face. The same injection technique was applied to both sides of the face. Use of anesthetics, overcorrection, and touch-ups was not permitted. The investigator completed a product satisfaction questionnaire. Adverse events (AEs) and injection-site reactions (ISRs) were recorded during the study. RESULTS: The products were placed at the epiperiosteal depth in 88.9% (n=40), at the subdermal depth in 8.9% (n=4) and at both levels in 2.2% (n=1) of subjects. A fanning technique using cannulae was applied in most cases (97.8%, n=44). Results of the investigator satisfaction questionnaire allowed CPM-26 to be characterized in comparison with other volumizing gels. Both study products were generally well tolerated. Local reactions were transient and of mild to moderate intensity, the most frequent being redness, pain, and swelling. CONCLUSION: Adequate injection technique in volumizing treatments is essential to create natural aesthetic rejuvenation while respecting the safety of the procedures. A 22G blunt cannula used with CPM-26 was preferred due to an easier and more homogeneous distribution of the product. The investigator also appreciated CPM-26 for its ease of injection, positioning, lifting, and volumizing capacity. J Drugs Dermatol. 2017;16(4):351-357.

  5. Laser induced fluorescence technique for detecting organic matter in East China Sea

    Science.gov (United States)

    Chen, Peng; Wang, Tianyu; Pan, Delu; Huang, Haiqing

    2017-10-01

    A laser induced fluorescence (LIF) technique for rapidly diagnosing chromophoric dissolved organic matter (CDOM) in water is discussed. We have developed a new field-portable laser fluorometer for rapid fluorescence measurements. In addition, the fluorescence spectral characteristics of the fluorescent constituents (e.g., CDOM, chlorophyll-a) were analyzed with a spectral deconvolution method based on a bi-Gaussian peak function. In situ measurements by the LIF technique compared well with values measured by a conventional spectrophotometric method in the laboratory. A significant correlation (R2 = 0.93) was observed between fluorescence measured by the technique and absorption measured by the laboratory spectrophotometer. The influence of temperature variation on LIF measurements was investigated in the laboratory, and a temperature coefficient was deduced for fluorescence correction. Distributions of CDOM fluorescence measured using this technique along the East China Sea coast are presented. The in situ results demonstrate the utility of the LIF technique for rapid detection of dissolved organic matter.
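
    A bi-Gaussian band has different widths on the two sides of its center. A hedged sketch of such a spectral deconvolution with SciPy follows; the band count, center wavelengths and initial guesses are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bigauss(x, a, c, s_left, s_right):
        """Bi-Gaussian band: a Gaussian with different widths on the two
        sides of the peak center c."""
        s = np.where(x < c, s_left, s_right)
        return a * np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

    def lif_model(x, *p):
        # sum of two overlapping bands, e.g. CDOM and chlorophyll-a
        return bigauss(x, *p[:4]) + bigauss(x, *p[4:])

    # wavelengths, spectrum: a measured LIF emission spectrum (user-supplied)
    # p0 = [a_cdom, 450.0, 40.0, 60.0, a_chl, 685.0, 10.0, 12.0]  # rough guesses
    # popt, _ = curve_fit(lif_model, wavelengths, spectrum, p0=p0)
    ```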

  6. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

    Thermogravimetric kinetics of bamboo waste (BW) pyrolysis have been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, BW pyrolytic profiles could be well separated into three reactions, corresponding to pseudo-hemicellulose (P-HC), pseudo-cellulose (P-CL), and pseudo-lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energies of P-HC, P-CL and P-LG were found to be 175.6 kJ/mol, 199.7 kJ/mol and 158.4 kJ/mol, respectively. Kinetic compensation effects (ln k0,z vs. Ez) of the pseudo-components showed good linearity, from which the pre-exponential factors (k0) were determined as 6.22x10^11 s^-1 (P-HC), 4.50x10^14 s^-1 (P-CL) and 1.3x10^10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanisms of P-HC, P-CL and P-LG were reaction-order models of f(α)=(1-α)^2, f(α)=1-α and f(α)=(1-α)^n (n=6-8), respectively. The mechanisms of P-HC and P-CL could be further reconstructed as n-th order Avrami-Erofeyev models of f(α)=0.62(1-α)[-ln(1-α)]^(-0.61) (n=0.62) and f(α)=1.08(1-α)[-ln(1-α)]^(0.074) (n=1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
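
    A sketch of this peak deconvolution, using the common Asym2sig parameterization (assumed here to match the variant used by the authors, which the abstract does not spell out):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def asym2sig(x, a, xc, w1, w2, w3):
        """Asymmetric double sigmoidal peak: the product of a rising and
        a falling logistic edge (Origin-style Asym2sig form)."""
        rise = 1.0 / (1.0 + np.exp(-(x - xc + w1 / 2.0) / w2))
        fall = 1.0 - 1.0 / (1.0 + np.exp(-(x - xc - w1 / 2.0) / w3))
        return a * rise * fall

    def dtg_model(T, *p):
        # three pseudo-components (P-HC, P-CL, P-LG), 5 parameters each
        return sum(asym2sig(T, *p[5 * i:5 * i + 5]) for i in range(3))

    # popt, _ = curve_fit(dtg_model, T, dtg, p0=initial_guesses, maxfev=20000)
    ```

    Each fitted component can then be fed separately into the Friedman iso-conversional analysis, as the study does.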

  7. Blindness and severe visual impairment in pupils at schools for the blind in Burundi.

    Science.gov (United States)

    Ruhagaze, Patrick; Njuguna, Kahaki Kimani Margaret; Kandeke, Lévi; Courtright, Paul

    2013-01-01

    To determine the causes of childhood blindness and severe visual impairment in pupils attending schools for the blind in Burundi in order to assist planning for services in the country. All pupils attending three schools for the blind in Burundi were examined. A modified WHO/PBL eye examination record form for children with blindness and low vision was used to record the findings. Data were analyzed for those who became blind or severely visually impaired before the age of 16 years. Overall, 117 pupils who became visually impaired before 16 years of age were examined. Of these, 109 (93.2%) were blind or severely visually impaired. The major anatomical cause of blindness or severe visual impairment was corneal pathology/phthisis (23.9%), followed by lens pathology (18.3%), uveal lesions (14.7%) and optic nerve lesions (11.9%). In the majority of pupils with blindness or severe visual impairment, the underlying etiology of visual loss was unknown (74.3%). More than half of the pupils with lens-related blindness had not had surgery; among those who had surgery, outcomes were generally poor. The causes identified indicate the importance of continuing preventive public health strategies, as well as the development of specialist pediatric ophthalmic services in the management of childhood blindness in Burundi. The geographic distribution of pupils at the schools for the blind indicates a need for community-based programs to identify and refer children in need of services.

  8. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation

  9. The Sokoto blind beggars: causes of blindness and barriers to rehabilitation services.

    Science.gov (United States)

    Balarabe, Aliyu Hamza; Mahmoud, Abdulraheem O; Ayanniyi, Abdulkabir Ayansiji

    2014-01-01

    To determine the causes of blindness and the barriers to accessing rehabilitation services (RS) among blind street beggars (bsb) in Sokoto, Nigeria. A cross-sectional survey of 202 bsb was conducted; the causes of blindness were diagnosed by clinical ophthalmic examination. There were 107 (53%) males and 95 (47%) females with a mean age of 49 years (SD 12.2). Most bsb, 191 (94.6%), had non-formal education. Of 190 (94.1%) irreversibly blind bsb, 180/190 (94.7%) had no light perception (NPL) bilaterally. The major causes of blindness were non-trachomatous corneal opacity (60.8%) and trachomatous corneal opacity (12.8%). There were 166 (82%) blind from avoidable causes, and 190 (94.1%) were irreversibly blind, with 76.1% due to avoidable causes. The available sub-standard RS were educational, vocational and financial support. The barriers to RS in the past included non-availability 151 (87.8%), inability to afford 2 (1.2%), unfelt need 4 (2.3%), family refusal 1 (0.6%), ignorance 6 (3.5%) and not being linked 8 (4.7%). The barriers to RS during the study period included the inability of 72 subjects (35.6%) to access RS, and 59 (81.9%) of these were due to lack of linkage to the existing services. Corneal opacification was the major cause of blindness among bsb. The main challenges to RS include the inadequate services available, and societal and user factors. Renewed efforts are warranted toward the prevention of avoidable causes of blindness, especially corneal opacities. The quality of life of blind street beggars should be improved through available, accessible, affordable, well-maintained and sustained rehabilitation services.

  10. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author)
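
    The core VST idea can be illustrated in one dimension with the classical Anscombe transform. The thesis couples the VST with spherical wavelet/curvelet transforms; this flat 1-D sketch deliberately omits that and uses an ordinary wavelet, a simple hard threshold and a crude algebraic inverse, all of which are illustrative choices:

    ```python
    import numpy as np
    import pywt

    def vst_denoise(counts, wavelet='db4', level=4):
        """Poisson denoising via a variance stabilizing transform:
        Anscombe transform -> wavelet thresholding -> inverse transform."""
        a = 2.0 * np.sqrt(np.asarray(counts, float) + 3.0 / 8.0)  # ~unit variance
        coeffs = pywt.wavedec(a, wavelet, level=level)
        coeffs[1:] = [pywt.threshold(c, 3.0, mode='hard') for c in coeffs[1:]]
        den = pywt.waverec(coeffs, wavelet)[: len(a)]
        return (den / 2.0) ** 2 - 3.0 / 8.0       # simple algebraic inverse
    ```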

  11. Postural control in blind subjects.

    Science.gov (United States)

    Soares, Antonio Vinicius; Oliveira, Cláudia Silva Remor de; Knabben, Rodrigo José; Domenech, Susana Cristina; Borges Junior, Noe Gomes

    2011-12-01

    To analyze postural control in acquired and congenitally blind adults. A total of 40 visually impaired adults participated in the research, divided into 2 groups: 20 with acquired blindness and 20 with congenital blindness - 21 males and 19 females, mean age 35.8 ± 10.8 years. The Brazilian version of the Berg Balance Scale and the motor domain of the functional independence measure were utilized. On the Berg Balance Scale the mean for acquired blindness was 54.0 ± 2.4 and 54.4 ± 2.5 for congenitally blind subjects; on the functional independence measure the mean for the acquired blind group was 87.1 ± 4.8 and 87.3 ± 2.3 for the congenitally blind group. Based upon the scales used, the results suggest that the ability to control posture can be developed by compensatory mechanisms and is not affected by visual loss in congenital and acquired blindness.

  12. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    Directory of Open Access Journals (Sweden)

    Roerdink Jos BTM

    2008-04-01

    Full Text Available Abstract Background We present a simple, data-driven method to extract haemodynamic response functions (HRFs) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs. Results ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains, and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series. Conclusion With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
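
    The two-stage structure of ForWaRD (Fourier-domain regularized inversion followed by wavelet-domain shrinkage) can be sketched as follows. The scale-dependent thresholds and noise-spectrum weighting of the real method are simplified here to a single Tikhonov parameter and one global soft threshold, so this is a schematic, not the published algorithm:

    ```python
    import numpy as np
    import pywt

    def forward_deconvolve(y, h, alpha=1e-2, wavelet='sym8', level=4):
        """ForWaRD-style sketch: (1) regularized Fourier inversion of the
        convolution kernel h, (2) wavelet shrinkage of the leaked noise."""
        n = len(y)
        H = np.fft.rfft(h, n)
        X = np.fft.rfft(y) * np.conj(H) / (np.abs(H) ** 2 + alpha)  # Tikhonov-like
        x = np.fft.irfft(X, n)
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
        coeffs[1:] = [pywt.threshold(c, 3 * sigma, 'soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:n]
    ```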

  13. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    Science.gov (United States)

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images, taken away from the Scherzer focus condition and therefore not representing the projected structures intuitively, were utilized for the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of defect structures, together with the recognition of Si and C atoms from the experimental images, is illustrated. The structure maps of an intrinsic stacking fault in the SiC area, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local nonlinear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
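
    Such a complex deconvolution amounts to dividing the voltage spectrum by the interpolated complex sensitivity within the calibrated band. A minimal sketch (array names and the out-of-band zeroing policy are illustrative assumptions):

    ```python
    import numpy as np

    def voltage_to_pressure(v, fs, f_cal, m_cal):
        """Convert a hydrophone voltage waveform to pressure by complex
        deconvolution with the frequency-dependent complex sensitivity
        M(f) (magnitude and phase from calibration, in V/Pa)."""
        n = len(v)
        f = np.fft.rfftfreq(n, 1.0 / fs)
        # interpolate the calibrated complex sensitivity onto the FFT grid
        m = np.interp(f, f_cal, m_cal.real) + 1j * np.interp(f, f_cal, m_cal.imag)
        band = (f >= f_cal[0]) & (f <= f_cal[-1])   # trust only the calibrated band
        V = np.fft.rfft(v)
        P = np.zeros(len(f), dtype=complex)
        P[band] = V[band] / m[band]
        return np.fft.irfft(P, n)
    ```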

  15. Postural control in blind subjects

    Directory of Open Access Journals (Sweden)

    Antonio Vinicius Soares

    2011-12-01

    Full Text Available Objective: To analyze postural control in acquired and congenitally blind adults. Methods: A total of 40 visually impaired adults participated in the research, divided into 2 groups: 20 with acquired blindness and 20 with congenital blindness - 21 males and 19 females, mean age 35.8 ± 10.8 years. The Brazilian version of the Berg Balance Scale and the motor domain of the functional independence measure were utilized. Results: On the Berg Balance Scale the mean for acquired blindness was 54.0 ± 2.4 and 54.4 ± 2.5 for congenitally blind subjects; on the functional independence measure the mean for the acquired blind group was 87.1 ± 4.8 and 87.3 ± 2.3 for the congenitally blind group. Conclusion: Based upon the scales used, the results suggest that the ability to control posture can be developed by compensatory mechanisms and is not affected by visual loss in congenital and acquired blindness.

  16. Modeling web-based information seeking by users who are blind.

    Science.gov (United States)

    Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti

    2011-01-01

    This article describes website information-seeking strategies used by users who are blind and compares them with those of sighted users. It outlines how assistive technologies and website design can aid users who are blind while seeking information. Blind and sighted participants were tested using an assessment tool while performing several tasks on websites. The times and keystrokes were recorded for all tasks, as well as the commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A model for information seeking based on the present study is described. Keywords are important in the strategies used by both groups of participants, and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.

  17. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp; Hassibi, Babak

    2017-01-01

    that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the signal's magnitude of the first

  18. An enhanced Hilbert–Huang transform technique for bearing condition monitoring

    International Nuclear Information System (INIS)

    Osman, Shazali; Wang, Wilson

    2013-01-01

    A new technique, enhanced Hilbert–Huang transform (eHHT), is proposed in this work for fault detection in rolling element bearings. It includes two processes: first, the collected vibration signal is denoised to highlight defect-related impulses; second, the denoised signal is further processed using the proposed eHHT technique to identify the defect features for bearing fault detection. Signal denoising is carried out using a minimum entropy deconvolution filter to reduce the effect of the transmission path on the measured signal. In the proposed eHHT, a novel strategy is proposed to enhance feature extraction based on the analysis of correlation and mutual information. The effectiveness of the proposed eHHT technique in feature extraction and analysis is verified by a series of experimental tests corresponding to different bearing conditions. Its robustness is examined using data sets from a different source. (paper)
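
    A classic Wiggins-style minimum entropy deconvolution filter, of the kind used here for denoising, can be sketched as an iterative kurtosis-maximizing FIR design. Filter length and iteration count below are illustrative, and the abstract does not state which MED variant the authors used:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, solve
    from scipy.signal import lfilter

    def med_filter(x, L=30, n_iter=30):
        """Minimum entropy deconvolution: iteratively design an FIR filter
        that maximizes the kurtosis of its output, counteracting the
        transmission path and sharpening defect-related impulses."""
        N = len(x)
        r = np.correlate(x, x, 'full')[N - 1: N - 1 + L]
        R = toeplitz(r)                         # input autocorrelation matrix
        f = np.zeros(L)
        f[L // 2] = 1.0                         # start from a delayed spike
        for _ in range(n_iter):
            y = lfilter(f, [1.0], x)
            c = np.correlate(y ** 3, x, 'full')
            b = c[N - 1: N - 1 + L]             # b[l] = sum_n y[n]^3 x[n-l]
            f = solve(R, b)                     # Wiggins normal equations
            f /= np.linalg.norm(f)              # kurtosis is scale-invariant
        return lfilter(f, [1.0], x), f
    ```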

  19. Causes and emerging trends of childhood blindness: findings from schools for the blind in Southeast Nigeria.

    Science.gov (United States)

    Aghaji, Ada; Okoye, Obiekwe; Bowman, Richard

    2015-06-01

    To ascertain the causes of severe visual impairment and blindness (SVI/BL) in schools for the blind in southeast Nigeria and to evaluate temporal trends. All blind children attending schools for the blind in southeast Nigeria were examined. All data were recorded on a WHO/Prevention of Blindness (WHO/PBL) form, entered into a Microsoft Access database and transferred to STATA V.12.1 for analysis. To estimate temporal trends in the causes of blindness, older (>15 years) children were compared with younger (≤15 years) children. 124 children were identified with SVI/BL. The most common anatomical site of blindness was the lens (33.9%). Overall, avoidable blindness accounted for 73.4% of all blindness. Exploring trends in SVI/BL between children ≤15 years of age and those >15 years old, this study shows a reduction in avoidable blindness but an increase in cortical visual impairment in the younger age group. The results from this study show a statistically significant decrease in avoidable blindness in children ≤15 years old. Corneal blindness appears to be decreasing, but cortical visual impairment seems to be emerging in the younger age group. Appropriate strategies for the prevention of avoidable childhood blindness in Nigeria need to be developed and implemented. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed

    2016-02-26

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation in conjunction with multihop communications is advocated herewith. Blind cooperation, however, is shown to be inefficient unless power control is applied. Inefficiency in this paper is measured in terms of the transport rate normalized to energy consumption. To that end, an uncoordinated power control mechanism is proposed whereby each device in a blind cooperative cluster randomly adjusts its transmit power level. An upper bound is derived for the mean transmit power that must be observed at each device. Finally, the uncoordinated power control mechanism is demonstrated to consistently outperform the simple point-to-point routing case. © 2015 IEEE.

  1. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High-resolution gamma-ray spectrometry based on high-purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the “iterative peak fitting deconvolution” method with a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine-tuning.

  2. 1975 Memorial Award Paper. Image generation and display techniques for CT scan data. Thin transverse and reconstructed coronal and sagittal planes.

    Science.gov (United States)

    Glenn, W V; Johnston, R J; Morton, P E; Dwyer, S J

    1975-01-01

    The various limitations to computerized axial tomographic (CT) interpretation are due in part to the 8-13 mm standard tissue plane thickness and in part to the absence of alternative planes of view, such as coronal or sagittal images. This paper describes a method for gathering multiple overlapped 8 mm transverse sections, subjecting these data to a deconvolution process, and then displaying thin (1 mm) transverse as well as reconstructed coronal and sagittal CT images. Verification of the deconvolution technique with phantom experiments is described. Application of the phantom results to human post mortem CT scan data illustrates this method's faithful reconstruction of coronal and sagittal tissue densities when correlated with actual specimen photographs of a sectioned brain. A special CT procedure, limited basal overlap scanning, is proposed for use on current first generation CT scanners without hardware modification.

  3. An energy kurtosis demodulation technique for signal denoising and bearing fault detection

    International Nuclear Information System (INIS)

    Wang, Wilson; Lee, Hewen

    2013-01-01

    Rolling element bearings are commonly used in rotary machinery. Reliable bearing fault detection techniques are very useful in industry for predictive maintenance operations. Bearing fault detection still remains a very challenging task, especially when defects occur on rotating bearing components, because the fault-related features are non-stationary in nature. In this work, an energy kurtosis demodulation (EKD) technique is proposed for bearing fault detection, especially for non-stationary signature analysis. The proposed EKD technique first denoises the signal using a maximum kurtosis deconvolution filter to counteract the effect of the signal transmission path, so as to highlight defect-associated impulses. Next, the denoised signal is demodulated over several frequency bands; a novel signature integration strategy is proposed to enhance feature characteristics. The effectiveness of the proposed EKD fault detection technique is verified by a series of experimental tests corresponding to different bearing conditions. (paper)

  4. Fair quantum blind signatures

    International Nuclear Information System (INIS)

    Tian-Yin, Wang; Qiao-Yan, Wen

    2010-01-01

    We present a new fair blind signature scheme based on the fundamental properties of quantum mechanics. In addition, we analyse the security of this scheme, and show that it is not possible to forge valid blind signatures. Moreover, comparisons between this scheme and public key blind signature schemes are also discussed. (general)

  5. [Visual impairment and blindness in children in a Malawian school for the blind].

    Science.gov (United States)

    Schulze Schwering, M; Nyrenda, M; Spitzer, M S; Kalua, K

    2013-08-01

    The aim of this study was to determine the anatomic sites of severe visual impairment and blindness in children in an integrated school for the blind in Malawi, and to compare the results with those of previous Malawian blind school studies. Children attending an integrated school for the blind in Malawi were examined in September 2011 using the standard WHO/PBL eye examination record for children with blindness and low vision. Visual acuity [VA] of the better eye was classified using the standardized WHO reporting form. Fifty-five pupils aged 6 to 19 years were examined: 39 (71 %) males and 16 (29 %) females. Thirty-eight (69 %) were blind [BL], 8 (15 %) were severely visually impaired [SVI], 8 (15 %) visually impaired [VI], and 1 (1.8 %) was not visually impaired [NVI]. The major anatomic sites of visual loss were the optic nerve (16 %) and retina (16 %), followed by lens/cataract (15 %), cornea (11 %), lesions of the whole globe (11 %), uveal pathologies (6 %) and cortical blindness (2 %). The exact aetiology of VI or BL could not be determined in most children. Albinism accounted for 13 % (7/55) of the visual impairments. 24 % of the cases were considered to be potentially avoidable: refractive amblyopia among pseudophakic patients and corneal scarring. Optic atrophy, retinal diseases (mostly albinism) and cataracts were the major causes of severe visual impairment and blindness in children in an integrated school for the blind in Malawi. Corneal scarring was now the fourth cause of visual impairment, compared to being the commonest cause 35 years ago. Congenital cataract and its postoperative outcome were the commonest remediable causes of visual impairment. Georg Thieme Verlag KG Stuttgart · New York.

  6. Using Commercially Available Techniques to Make Organic Chemistry Representations Tactile and More Accessible to Students with Blindness or Low Vision

    Science.gov (United States)

    Supalo, Cary A.; Kennedy, Sean H.

    2014-01-01

    Organic chemistry courses can present major obstacles to access for students with blindness or low vision (BLV). In recent years, efforts have been made to represent organic chemistry concepts in tactile forms for blind students. These methodologies are described in this manuscript. Further work being done at Illinois State University is also…

  7. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    Science.gov (United States)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by crosscorrelation of responses. Recently, an alternative implementation was proposed: SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, however, was the need to estimate, from the earthquake responses, responses without free-surface interaction. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields. We address specific problems for teleseismic wavefields, such as long and complicated source
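
    Per frequency, MDD amounts to a regularized matrix inversion of the point-spread function out of the correlation function. A hedged sketch follows; the array layout (frequency slices of matrices) and the relative damping are assumptions of this illustration, not the authors' implementation:

    ```python
    import numpy as np

    def mdd(C, Gamma, eps=1e-3):
        """Interferometry by multidimensional deconvolution: per frequency
        slice, solve C(w) = G(w) @ Gamma(w) for the Green's functions G by
        damped least squares, deblurring the correlation function C by the
        point-spread function Gamma.
        C: (n_freq, n_rec, n_src) complex; Gamma: (n_freq, n_src, n_src)."""
        G = np.zeros_like(C)
        for k in range(C.shape[0]):                     # loop over frequencies
            Gm = Gamma[k]
            damp = eps * np.trace(Gm @ Gm.conj().T).real / Gm.shape[0]
            G[k] = C[k] @ Gm.conj().T @ np.linalg.inv(
                Gm @ Gm.conj().T + damp * np.eye(Gm.shape[0]))
        return G
    ```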

  8. Prevalence of blindness and diabetic retinopathy in northern Jordan.

    Science.gov (United States)

    Rabiu, Mansur M; Al Bdour, Muawyah D; Abu Ameerh, Mohammed A; Jadoon, Muhammed Z

    2015-01-01

    To estimate the prevalence of blindness, visual impairment, diabetes, and diabetic retinopathy in north Jordan (Irbid) using the rapid assessment of avoidable blindness and diabetic retinopathy methodology. A multistage cluster random sampling technique was used to select participants for this survey. A total of 108 clusters were selected using the probability proportional to size method, while subjects within the clusters were selected using the compact segment method. Survey teams moved from house to house in selected segments, examining residents 50 years and older until 35 participants were recruited. All eligible people underwent a standardized examination protocol in their homes, which included an ophthalmic examination and a random blood sugar test using digital glucometers (Accu-Chek). Diabetic retinopathy among diabetic patients was assessed through dilated fundus examination. A total of 3638 of the 3780 eligible participants were examined. The age- and sex-adjusted prevalences of blindness, severe visual impairment, and visual impairment with available correction were 1.33% (95% confidence interval [CI] 0.87-1.73), 1.82% (95% CI 1.35-2.25), and 9.49% (95% CI 8.26-10.74), respectively, all higher in women. Untreated cataract and diabetic retinopathy were the major causes of blindness, accounting for 46.7% and 33.2% of total blindness cases, respectively. Glaucoma was the third major cause, accounting for 8.9% of cases. The prevalence of diabetes mellitus was 28.6% (95% CI 26.9-30.3) among the study population, and higher in women. The prevalence of any retinopathy among diabetic patients was 48.4%. Cataract and diabetic retinopathy are the 2 major causes of blindness and visual impairment in northern Jordan. For both conditions, women are primarily affected, suggesting possible limitations in access to services. A diabetic retinopathy screening program needs to proactively create sex-sensitive awareness and provide easily accessible screening services with prompt treatment.

  9. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Science.gov (United States)

    Vashist, Praveen; Senjam, Suraj Singh; Gupta, Vivek; Gupta, Noopur; Kumar, Atul

    2017-02-01

    A review of the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific published literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations as well as their cross-references, was done until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess the impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness under the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for achieving elimination of blindness also become much more difficult to achieve under the NPCB definition. Ignoring differences in definitions when comparing the global and Indian prevalence of blindness will cause erroneous interpretations. We recommend that appropriate modifications be made to the NPCB definition of blindness to make it consistent with the WHO definition.

  10. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Directory of Open Access Journals (Sweden)

    Praveen Vashist

    2017-01-01

    Full Text Available A review of the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific published literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations as well as their cross-references, was done until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess the impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness under the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for achieving elimination of blindness also become much more difficult to achieve under the NPCB definition. Ignoring differences in definitions when comparing the global and Indian prevalence of blindness will cause erroneous interpretations. We recommend that appropriate modifications be made to the NPCB definition of blindness to make it consistent with the WHO definition.

  11. Color blindness defect and medical laboratory technologists: unnoticed problems and the care for screening.

    Science.gov (United States)

    Dargahi, Hossein; Einollahi, Nahid; Dashti, Nasrin

    2010-01-01

    Color-blindness is the inability to perceive differences between some colors that other people can distinguish. A literature search indicates a measurable prevalence of color vision deficiency in the medical profession and an impact on medical skills. Medical laboratory technicians and technologists should therefore also be screened for color blindness. This research aimed to study the prevalence of color blindness among hospital clinical laboratory employees and students at Tehran University of Medical Sciences (TUMS). A cross-sectional descriptive and analytical study was conducted among 633 TUMS clinical laboratory sciences students and hospital clinical laboratory employees to detect color-blindness using the Ishihara test. Subjects were first screened with a set of plates; those classified as possibly color defective were tested further with additional plates to characterize the color-blindness defect. The data were stored with SPSS software and analyzed by statistical methods. This is the first study to determine the prevalence of color-blindness in clinical laboratory sciences students and employees. 2.4% of the TUMS medical laboratory sciences students and hospital clinical laboratory employees are color-blind. There is a significant correlation between color-blindness and sex and age, but no significant correlation was found between color-blindness and exposure to chemical agents, type of job, history of trauma or surgery, familial defect, or race. Color-blind students and employees may face a wide range of difficulties in laboratory diagnosis and techniques, with a potential for errors. We suggest that color blindness, as a medical condition, should restrict employment choices for medical laboratory technician and technologist jobs in Iran.

  12. Evaluation of obstructive uropathy by deconvolution analysis of {sup 99m}Tc-mercaptoacetyltriglycine ({sup 99m}Tc-MAG3) renal scintigraphic data. A comparison with diuresis renography

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Yoshiyuki [Mie Univ., Tsu (Japan). School of Medicine

    1997-06-01

    The clinical significance of ERPF (effective renal plasma flow) and MTT (mean transit time) calculated by deconvolution analysis was studied in patients with obstructive uropathy. The subjects were 84 kidneys of 38 patients and 4 people without renal abnormality (22 males and 20 females), with a mean age of 53.8 years. Scintigraphy was performed with a Toshiba γ-camera GCA-7200A equipped with a low-energy high-resolution collimator, with an energy window of 149 keV±20%, starting 20 min after loading with 500 ml of water and immediately after intravenous administration of {sup 99m}Tc-MAG3 (200 MBq). Blood was collected 5 min later, and furosemide was given intravenously at 10 min. Plasma radioactivity was measured in a well-type scintillation counter and used to correct the blood concentration-time curve obtained from the heart-area data. Split MTT, regional MTT and ERPF were calculated by deconvolution analysis. Impaired transit was judged from the renogram after furosemide loading and was classified into 6 types. ERPF was found to be lowered in cases of obstruction and in low renal function; regional MTT was prolonged only in the former cases. The examination with deconvolution analysis was concluded to merit wide use, since it gives useful information for treatment. (K.H.)
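
    The deconvolution underlying such renal transit estimates can be sketched as a regularized Toeplitz inversion of the renogram by the blood input curve. The ridge regularization, its strength, and the MTT = area/height convention for the retention function are assumptions of this sketch, not details from the record:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def retention_function(renogram, input_curve, dt, lam=0.1):
        """Regularized deconvolution of a renogram by the blood input
        curve. The kidney retention function h gives MTT = integral(h)/h(0),
        with h(0) proportional to renal uptake (related to ERPF)."""
        n = len(renogram)
        A = toeplitz(input_curve, np.zeros(n)) * dt   # discrete convolution matrix
        # ridge-regularized least squares; lam must be tuned to the noise level
        h = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ renogram)
        mtt = np.sum(h) * dt / h[0]
        return h, mtt
    ```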

  13. BlindSense: An Accessibility-inclusive Universal User Interface for Blind People

    Directory of Open Access Journals (Sweden)

    A. Khan

    2018-04-01

    Full Text Available A large number of blind people use smartphone-based assistive technology to perform their common activities. In order to provide a better user experience, the existing user interface paradigm needs to be revisited. A new user interface model is proposed in this paper, providing a simplified, semantically consistent, and blind-friendly adaptive user interface. The proposed solution is evaluated through an empirical study with 63 blind people, demonstrating an improved user experience in performing common activities on a smartphone.

  14. Causes of blindness and career choice among pupils in a blind ...

    African Journals Online (AJOL)

    available eye care services. Furthermore there is need for career talk in schools for the blind to ... career where their potential can be fully maximized. ... tropicamide 1% eye drops. ... Foster A, Gilbert C. Epidemiology of childhood blindness.

  15. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
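    The record does not reproduce MAXED itself, but the maximum entropy principle it applies can be sketched as follows: choose the spectrum that maximizes the relative (Skilling) entropy against a default spectrum while reproducing the measured sphere counts through the response matrix. The penalized formulation and parameter names below are illustrative assumptions, not the MAXED algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_unfold(R, counts, sigma, f_default, mu=1.0):
    """Unfold counts ~= R @ f by maximizing entropy relative to f_default.
    R: (n_spheres, n_bins) response matrix; sigma: count uncertainties."""
    def neg_objective(logf):
        f = np.exp(logf)  # positivity enforced by the parameterization
        entropy = np.sum(f - f_default - f * np.log(f / f_default))
        chi2 = np.sum(((R @ f - counts) / sigma) ** 2)
        return -(entropy - mu * chi2)  # trade entropy against data fit
    res = minimize(neg_objective, np.log(f_default), method="L-BFGS-B")
    return np.exp(res.x)
```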

  16. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA){sub n} repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
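    A minimal sketch of the stutter-deconvolution idea (the record's actual methods are more elaborate): model the observed peak heights as true allele quantities convolved with a stutter kernel, and recover the quantities by nonnegative least squares. The stutter fractions and indexing convention are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def remove_stutter(observed, stutter=(0.10, 0.03)):
    """Deconvolve PCR stutter from peak heights indexed by allele length.
    A true allele at position j leaks into positions j-1, j-2 (shorter products)."""
    kernel = np.array([1.0, *stutter])
    n = len(observed)
    A = np.zeros((n, n))
    for j in range(n):
        for k, w in enumerate(kernel):
            if j - k >= 0:
                A[j - k, j] = w
    alleles, _ = nnls(A, observed)  # nonnegative allele quantities
    return alleles
```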

  17. Real-time adaptive concepts in acoustics blind signal separation and multichannel echo cancellation

    CERN Document Server

    Schobben, Daniel W E

    2001-01-01

    Blind Signal Separation (BSS) deals with recovering (filtered versions of) source signals from an observed mixture thereof. The term `blind' relates to the fact that there are no reference signals for the source signals and also that the mixing system is unknown. This book presents a new method for blind signal separation, which is developed to work on microphone signals. Acoustic Echo Cancellation (AEC) is a well-known technique to suppress the echo that a microphone picks up from a loudspeaker in the same room. Such acoustic feedback occurs for example in hands-free telephony and can lead to a perceived loud tone. For an application such as a voice-controlled television, a stereo AEC is required to suppress the contribution of the stereo loudspeaker setup. A generalized AEC is presented that is suited for multi-channel operation. New algorithms for Blind Signal Separation and multi-channel Acoustic Echo Cancellation are presented. A background is given in array signal processing methods, adaptive filter the...

  18. A virtually blind spectrum efficient channel estimation technique for mimo-ofdm system

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output (MIMO) antennas in conjunction with Orthogonal Frequency-Division Multiplexing (OFDM) form a dominant air interface for 4G and 5G cellular communication systems. Additionally, the MIMO-OFDM-based air interface is the foundation for the latest wireless local area networks, wireless personal area networks, and digital multimedia broadcasting. Whether it is a single-antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of channel bandwidth. This paper describes a virtually blind, spectrum-efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)

  19. Models for the blind

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2014-01-01

    When displayed in museum cabinets, tactile objects that were once used in the education of blind and visually impaired people appear to us, sighted visitors, as anything but tactile. We cannot touch them due to museum policies and we can hardly imagine what it would have been like for a blind person to touch them in their historical context. And yet these objects are all about touch, from the concrete act of touching something to the norms that assigned touch a specific pedagogical role in nineteenth-century blind schools. The aim of this article is twofold. First, I provide a historical background to the tactile objects of the blind. When did they appear as a specific category of pedagogical aid and how did they help determine the relation between blindness, vision, and touch? Second, I address the tactile objects from the point of view of empirical sources and historical evidence. Material...

  20. Adaptive DSP Algorithms for UMTS: Blind Adaptive MMSE and PIC Multiuser Detection

    NARCIS (Netherlands)

    Potman, J.

    2003-01-01

    A study of the application of blind adaptive Minimum Mean Square Error (MMSE) and Parallel Interference Cancellation (PIC) multiuser detection techniques to Wideband Code Division Multiple Access (WCDMA), the physical layer of Universal Mobile Telecommunication System (UMTS), has been performed as

  1. Multichannel blind image deconvolution based on innovations

    OpenAIRE

    Kopriva, Ivica; Seršić, Damir

    2011-01-01

    The linear mixture model (LMM) has recently been used for multi-channel representation of a blurred image. This enables use of multivariate data analysis methods such as independent component analysis (ICA) to solve blind image deconvolution as an instantaneous blind source separation (BSS) requiring no a priori knowledge about the size and origin of the blurring kernel. However, there remains a serious weakness of this approach: statistical dependence between hidden variables in the LMM. The...
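    A hedged sketch of the instantaneous-BSS view mentioned in this record: treat several channels derived from the blurred image as an instantaneous linear mixture and unmix them with ICA. Using shifted copies of a single blurred image as the "channels" is an assumption made here purely for illustration, not the authors' multichannel construction.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_unmix_channels(blurred, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Stack shifted copies of a blurred image as mixture channels (LMM view),
    then estimate independent sources with FastICA."""
    h, w = blurred.shape
    X = np.stack([np.roll(blurred, s, axis=(0, 1)).ravel() for s in shifts])
    ica = FastICA(n_components=len(shifts), whiten="unit-variance", random_state=0)
    S = ica.fit_transform(X.T).T  # rows are estimated source images
    return S.reshape(len(shifts), h, w)
```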

  2. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

    Full Text Available Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.

  3. ICF implosion hotspot ion temperature diagnostic techniques based on neutron time-of-flight method

    International Nuclear Information System (INIS)

    Tang Qi; Song Zifeng; Chen Jiabin; Zhan Xiayu

    2013-01-01

    The ion temperature of the implosion hotspot is a very important parameter for inertial confinement fusion: it reflects the energy level of the hotspot, and it is very sensitive to implosion symmetry and implosion speed. ICF implosion hotspot ion temperature diagnostic techniques based on the neutron time-of-flight method are described. A neutron TOF spectrometer was developed using an ultrafast plastic scintillator as the neutron detector. The time response of the spectrometer has a 1.1 ns FWHM and a 0.5 ns rise time. A TOF spectrum resolving method based on deconvolution and low-pass filtering is illustrated. Implosion hotspot ion temperature under low-neutron-yield and low-ion-temperature conditions at the Shenguang-Ⅲ facility was acquired using these diagnostic techniques. (authors)
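    For orientation, the standard relation between the deconvolved TOF width and ion temperature can be sketched as below: the fractional energy spread is twice the fractional time spread, and for DD neutrons the Brysk width relation gives dE_FWHM ≈ 82.5 keV × sqrt(Ti [keV]). The flight path and FWHM in the example are made-up values, and the sketch assumes the detector response has already been deconvolved as the record describes.

```python
import numpy as np

M_N = 1.675e-27               # neutron mass, kg
E_DD_J = 2.45e6 * 1.602e-19   # 2.45 MeV DD-neutron energy, J

def ion_temperature_dd(tof_fwhm_s, flight_path_m):
    """Ion temperature (keV) from the deconvolved FWHM of a DD-neutron
    TOF spectrum, via dE/E = 2*dt/t and dE_FWHM ~= 82.5*sqrt(Ti) keV."""
    v = np.sqrt(2 * E_DD_J / M_N)          # mean neutron speed, m/s
    t = flight_path_m / v                  # mean flight time, s
    dE_keV = 2 * 2450.0 * tof_fwhm_s / t   # energy FWHM in keV
    return (dE_keV / 82.5) ** 2

# Hypothetical example: 1.5 ns deconvolved FWHM over a 1 m flight path (~3.7 keV).
print(ion_temperature_dd(1.5e-9, 1.0))
```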

  4. Blinded trials taken to the test

    DEFF Research Database (Denmark)

    Hróbjartsson, A; Forfang, E; Haahr, M T

    2007-01-01

    Blinding can reduce bias in randomized clinical trials, but blinding procedures may be unsuccessful. Our aim was to assess how often randomized clinical trials test the success of blinding, the methods involved, and how often blinding is reported as being successful.

  5. How the blind "see" Braille: lessons from functional magnetic resonance imaging.

    Science.gov (United States)

    Sadato, Norihiro

    2005-12-01

    What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.

  6. Postictal blindness in adults.

    OpenAIRE

    Sadeh, M; Goldhammer, Y; Kuritsky, A

    1983-01-01

    Cortical blindness following grand mal seizures occurred in five adult patients. The causes of the seizures included idiopathic epilepsy, vascular accident, brain cyst, acute encephalitis and chronic encephalitis. Blindness was permanent in one patient, but the others recovered within several days. Since most of the patients were either unaware of or denied their blindness, it is possible that this event often goes unrecognised. Cerebral hypoxia is considered the most likely mechanism.

  7. INTRODUCTION Childhood blindness is increasingly becoming a ...

    African Journals Online (AJOL)

    The number of blind years resulting from blindness in children is also equal to the number of blind years due to age-related cataract.10 The burden of disability in terms of blind years in these children represents a major ... CAUSES OF BLINDNESS AND VISUAL IMPAIRMENT AT THE SCHOOL FOR THE BLIND, OWO, NIGERIA.

  8. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
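    The abstract's formulation translates naturally into an augmented nonnegative least-squares problem; the sketch below (a stand-in, not the authors' algorithm) builds causality into a lower-triangular convolution matrix, enforces positivity via NNLS, and adds smoothness through a second-difference penalty weighted by the single regularization parameter the record mentions.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_wrt(rain, aquifer, lam=10.0, n_kernel=120):
    """Estimate a Water Residence Time curve h >= 0 minimizing
    ||A h - aquifer||^2 + lam * ||D h||^2 with a causal convolution matrix A."""
    n = len(rain)
    A = np.zeros((n, n_kernel))
    for j in range(n_kernel):
        A[j:, j] = rain[: n - j]               # causal convolution with the input
    D = np.diff(np.eye(n_kernel), 2, axis=0)   # second-difference smoothness
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    y_aug = np.concatenate([aquifer, np.zeros(D.shape[0])])
    h, _ = nnls(A_aug, y_aug)                  # positivity constraint
    return h
```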

  9. Liquid argon TPC signal formation, signal processing and reconstruction techniques

    Science.gov (United States)

    Baller, B.

    2017-07-01

    This document describes a reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions benefits from knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain and to remove coherent noise. A unique clustering algorithm reconstructs line-like trajectories and vertices in two dimensions, which are then matched to create 3D objects. These techniques and algorithms are available to all experiments that use the LArSoft suite of software.
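    The wire-signal deconvolution the record describes can be pictured with a one-dimensional frequency-domain sketch: divide out the electronics/field response with a Wiener-style regularizer so noise is not amplified where the response is weak. This is an illustrative stand-in, not the LArSoft implementation.

```python
import numpy as np

def deconvolve_wire(signal, response, noise_power=1e-2):
    """Wiener-regularized deconvolution of a digitized wire waveform by a
    known response; noise_power sets the regularization floor."""
    n = len(signal)
    S = np.fft.rfft(signal)
    R = np.fft.rfft(response, n)
    filt = np.conj(R) / (np.abs(R) ** 2 + noise_power)
    return np.fft.irfft(S * filt, n)
```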

  10. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    Science.gov (United States)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs, PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolution of all the data recorded at a seismic station, removing the phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the Coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depths beneath the Gulf Coast and the Llano Uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.
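    For background, classical time-domain iterative deconvolution for receiver functions (in the pulse-stripping style of Ligorria and Ammon) proceeds spike by spike; the wavefield method of this record instead removes whole wavefronts across a station set, which this single-trace sketch does not attempt. Names and the iteration count are illustrative.

```python
import numpy as np

def iterative_deconvolution(numerator, denominator, n_iter=50):
    """Build a spike-train receiver function: repeatedly find the lag where the
    denominator (P trace) best correlates with the residual, add a scaled spike,
    and subtract the corresponding shifted copy."""
    n = len(numerator)
    rf = np.zeros(n)
    residual = numerator.astype(float)
    norm = float(np.dot(denominator, denominator))
    for _ in range(n_iter):
        xcorr = np.correlate(residual, denominator, mode="full")[n - 1:]
        lag = int(np.argmax(np.abs(xcorr)))
        amp = xcorr[lag] / norm
        rf[lag] += amp
        shifted = np.zeros(n)
        shifted[lag:] = denominator[: n - lag]
        residual = residual - amp * shifted
    return rf
```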

  11. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    Science.gov (United States)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of a broadly varied MW structurally similar to native lignin. For the first time, we report an effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L⁻¹ formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
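    The multiply-charged deconvolution step can be illustrated with a toy voting scheme: each [M + zH]z+ peak proposes candidate neutral masses over a charge range, and masses supported by several charge states win. The proton mass is standard; the tolerance, charge range and function names are assumptions, not the authors' deconvolution procedure.

```python
import numpy as np

PROTON = 1.007276  # Da

def charge_deconvolve(mz_peaks, z_range=range(1, 8), tol=0.5):
    """Map m/z peaks of [M + zH]z+ ions to neutral masses and return the mass
    supported by the most charge states."""
    masses = np.sort([z * (mz - PROTON) for mz in mz_peaks for z in z_range])
    best, votes = None, 0
    for m in masses:
        support = int(np.sum(np.abs(masses - m) < tol))
        if support > votes:
            best, votes = float(m), support
    return best, votes
```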

  12. Childhood Fears among Children Who Are Blind: The Perspective of Teachers Who Are Blind

    Science.gov (United States)

    Al-Zboon, Eman

    2017-01-01

    The aim of this study was to investigate childhood fears in children who are blind from the perspective of teachers who are blind. The study was conducted in Jordan. Forty-six teachers were interviewed. Results revealed that the main fear content in children who are blind includes fear of the unknown; environment-, transportation- and…

  13. Postoperative endodontic pain of three different instrumentation techniques in asymptomatic necrotic mandibular molars with periapical lesion: a prospective, randomized, double-blind clinical trial.

    Science.gov (United States)

    Shokraneh, Ali; Ajami, Majid; Farhadi, Nastaran; Hosseini, Mohsen; Rohani, Bita

    2017-01-01

    The purpose of this prospective, randomized, double-blind study was to compare postoperative pain of root canal treatment in patients with asymptomatic mandibular molar teeth with necrotic pulp and periapical lesion using three different instrumentation techniques: hand, multi-file rotary (ProTaper Universal), and reciprocating single-file (Wave-One) instrumentation. Ninety-six patients who fulfilled specific inclusion criteria were assigned to three groups according to the root canal instrumentation technique used: hand (G1), ProTaper Universal (G2), and Wave-One (G3). One-visit root canal treatment was carried out, and the severity of postoperative pain was assessed by the Heft-Parker visual analogue scale 6, 12, 18, 24, 48, and 72 h after treatment. Data were analyzed by Kruskal-Wallis, χ2, Cochran's Q, one-way ANOVA, and Spearman's correlation analyses (α = 0.05). The patients in group 3 reported significantly lower postoperative pain levels at 6, 12, and 18 h compared with the patients in the two other groups (P < .05). Analgesic consumption was significantly higher in group 1 (P < .05). Postoperative pain was significantly lower in patients undergoing root canal instrumentation with the Wave-One file compared with the ProTaper Universal and hand files.

  14. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    OpenAIRE

    Apostolos Meliones; Demetrios Sampson

    2018-01-01

    Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils), which has primarily addressed blind or visually impaired (BVI) accessibility and self-guided tours in museums. A pilot prototype has been developed and is curr...

  15. Optimized blind gamma-ray pulsar searches at fixed computing budget

    International Nuclear Information System (INIS)

    Pletsch, Holger J.; Clark, Colin J.

    2014-01-01

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  16. Causes of blindness and career choice among pupils in a blind school; South Western Nigeria.

    Science.gov (United States)

    Fadamiro, Christianah Olufunmilayo

    2014-01-01

    The causes of blindness vary from place to place, with about 80% of them being avoidable. Furthermore, blind people face many challenges in career choice, limiting their economic potential and full integration into society. This study aims at identifying the causes of blindness and career choice among pupils in a school for the blind in South-Western Nigeria. This is a descriptive study of the causes of blindness and career choice among 38 pupils residing in a school for the blind at Ikere-Ekiti, South-Western Nigeria. Thirty-eight pupils, comprising 25 males (65.8%) and 13 females (34.2%) with an age range of 6-39 years, were seen for the study. The commonest cause of blindness was cataract with 14 cases (36.84%), while congenital glaucoma and infection had an equal proportion of 5 cases each (13.16%). Avoidable causes constituted the greatest proportion of causes, 27 (71.05%), while unavoidable causes accounted for 11 (28.9%). Law was the profession most desired by the pupils, 11 (33.3%), followed by teaching, 9 (27.3%); other desired professions included engineering, journalism and farming. The greatest proportion of the causes of blindness identified in this study is avoidable. There is a need to create public awareness of some of the notable causes, particularly cataract, and to motivate the community to utilize available eye care services. Furthermore, there is a need for career talks in schools for the blind to enable pupils to choose careers where their potential can be fully maximized.

  17. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    Science.gov (United States)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
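    The PCA-plus-HCA stage of such a pipeline is straightforward to sketch with standard tools (the published analysis also includes PLS-DA and careful preprocessing, omitted here); the data loading, component count and cluster count are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

def classify_chamber_spectra(X, n_components=5, n_clusters=6):
    """Reduce composition spectra (rows = chamber experiments) with PCA, then
    group the experiments by Ward hierarchical clustering on the PC scores."""
    scores = PCA(n_components=n_components).fit_transform(
        StandardScaler().fit_transform(X))
    Z = linkage(scores, method="ward")
    return scores, fcluster(Z, t=n_clusters, criterion="maxclust")
```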

  18. A survey of visual impairment and blindness in children attending seven schools for the blind in Myanmar.

    Science.gov (United States)

    Muecke, James; Hammerton, Michael; Aung, Yee Yee; Warrier, Sunil; Kong, Aimee; Morse, Anna; Holmes, Martin; Yapp, Michael; Hamilton, Carolyn; Selva, Dinesh

    2009-01-01

    To determine the causes of visual impairment and blindness amongst children in schools for the blind in Myanmar; to identify the avoidable causes of visual impairment and blindness; and to provide spectacles, low vision aids, orientation and mobility training and ophthalmic treatment where indicated. Two hundred and eight children under 16 years of age from all 7 schools for the blind in Myanmar were examined and the data entered into the World Health Organization Prevention of Blindness Examination Record for Childhood Blindness (WHO/PBL ERCB). One hundred and ninety-nine children (95.7%) were blind (BL = visual acuity [VA] < 3/60) or severely visually impaired (SVI = VA < 6/60). A large proportion of the children in schools for the blind in Myanmar had potentially avoidable causes of SVI/BL. With measles being both the commonest identifiable and commonest avoidable cause, the data support the need for a measles immunization campaign. There is also a need for a dedicated pediatric eye care center with regular ophthalmology visits to the schools, and improved optometric, low vision and orientation and mobility services in Myanmar.

  19. Overcomplete Blind Source Separation by Combining ICA and Binary Time-Frequency Masking

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2005-01-01

    a novel method for over-complete blind source separation. Two powerful source separation techniques have been combined, independent component analysis and binary time-frequency masking. Hereby, it is possible to iteratively extract each speech signal from the mixture. By using merely two microphones we...
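    A minimal sketch of the binary time-frequency masking half of the combined method: assign each time-frequency cell to one source and resynthesize. Here a simple two-channel level comparison stands in for the ICA-derived spatial cue that the actual method iterates with; the sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def binary_mask_separate(x_left, x_right, fs=16000, nperseg=512):
    """Two-microphone separation sketch: mask each STFT cell toward the
    dominant channel, then invert the masked spectrograms."""
    _, _, L = stft(x_left, fs, nperseg=nperseg)
    _, _, R = stft(x_right, fs, nperseg=nperseg)
    mask = np.abs(L) > np.abs(R)   # binary mask per time-frequency cell
    _, s1 = istft(L * mask, fs, nperseg=nperseg)
    _, s2 = istft(R * ~mask, fs, nperseg=nperseg)
    return s1, s2
```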

  20. Blindness and visual impairment in opera.

    Science.gov (United States)

    Aydin, Pinar; Ritch, Robert; O'Dwyer, John

    2018-01-01

    The performing arts mirror the human condition. This study sought to analyze the reasons for the inclusion of visually impaired characters in opera, the cause of the blindness or near-blindness, and the dramatic purpose of the blindness in the storyline. We reviewed operas from the 18th century to 2010 and included all characters with ocular problems. We classified the cause of each character's ocular problem (organic, nonorganic, and other) in relation to the thematic setting of the opera: biblical and mythical, blind beggars or blind musicians, historical (real or fictional characters), and contemporary or futuristic. Cases of blindness in 55 characters (2 as a choir) from 38 operas were detected over 3 centuries of repertoire: 11 had trauma-related visual impairment, 5 had congenital blindness, 18 had visual impairment of unknown cause, 9 had psychogenic or malingering blindness, and 12 were symbolic or miracle-related. One opera featured an ophthalmologist curing a patient. The research illustrates that visual impairment was frequently used as an artistic device to enhance the intent of an opera and situate it in its time.

  1. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    Science.gov (United States)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and most important the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  2. "Color-Blind" Racism.

    Science.gov (United States)

    Carr, Leslie G.

    Examining race relations in the United States from a historical perspective, this book explains how the constitution is racist and how color blindness is actually a racist ideology. It is argued that Justice Harlan, in his dissenting opinion in Plessy v. Ferguson, meant that the constitution and the law must remain blind to the existence of race…

  3. 42 CFR 436.531 - Determination of blindness.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 (2010-10-01). Determination of blindness, 42 CFR 436.531... Requirements for Medicaid Eligibility, Blindness. § 436.531 Determination of blindness. In determining blindness... determine on behalf of the agency— (1) Whether the individual meets the definition of blindness; and (2...

  4. Depth from Optical Turbulence

    Science.gov (United States)

    2012-01-01

    Dagobert, and C. Franchis. Atmospheric turbulence restoration by diffeomorphic image registration and blind deconvolution. In ACIVS, 2008. [4] S... [20] V. Tatarskii. Wave Propagation in a Turbulent Medium. McGraw-Hill Books, 1961. [21] Y. Tian and S. Narasimhan. A globally optimal data-driven

  5. Is love blind? Sexual behavior and psychological adjustment of adolescents with blindness

    NARCIS (Netherlands)

    Kef, S.; Bos, H.

    2006-01-01

    In the present study, we examined sexual knowledge, sexual behavior, and psychological adjustment of adolescents with blindness. The sample included 36 Dutch adolescents who are blind, 16 males and 20 females. Results of the interviews revealed no problems regarding sexual knowledge or psychological

  6. Poverty and Blindness in Nigeria: Results from the National Survey of Blindness and Visual Impairment.

    Science.gov (United States)

    Tafida, A; Kyari, F; Abdull, M M; Sivasubramaniam, S; Murthy, G V S; Kana, I; Gilbert, Clare E

    2015-01-01

    Poverty can be a cause and consequence of blindness. Some causes only affect the poorest communities (e.g. trachoma), and poor individuals are less likely to access services. In low income countries, cataract blind adults have been shown to be less economically active, indicating that blindness can exacerbate poverty. This study aims to explore associations between poverty and blindness using national survey data from Nigeria. Participants ≥40 years were examined in 305 clusters (2005-2007). Sociodemographic information, including literacy and occupation, was obtained by interview. Presenting visual acuity (PVA) was assessed using a reduced tumbling E LogMAR chart. Full ocular examination was undertaken by experienced ophthalmologists on all with PVA < 6/12. Prevalences of blindness (PVA < 3/60) were 8.5% (95% CI 7.7-9.5%), 2.5% (95% CI 2.0-3.1%), and 1.5% (95% CI 1.2-2.0%) in poorest, medium and affluent households, respectively (p = 0.001). Cause-specific prevalences of blindness from cataract, glaucoma, uncorrected aphakia and corneal opacities were significantly higher in poorer households. Cataract surgical coverage was low (37.2%), being lowest in females in poor households (25.3%). Spectacle coverage was 3 times lower in poor than affluent households (2.4% vs. 7.5%). In Nigeria, blindness is associated with poverty, in part reflecting lower access to services. Reducing avoidable causes will not be achieved unless access to services improves, particularly for the poor and women.

  7. Sensory augmentation for the blind

    Directory of Open Access Journals (Sweden)

    Silke Manuela Kärcher

    2012-03-01

    Full Text Available Enacted theories of consciousness conjecture that perception and cognition arise from an active experience of the regular relations that tie together the sensory stimulation of different modalities and associated motor actions. Previous experiments investigated this concept by employing the technique of sensory substitution. Building on these studies, here we test a set of hypotheses derived from this framework and investigate the utility of sensory augmentation in handicapped people. We provide a late blind subject with a new set of sensorimotor laws: a vibro-tactile belt continually signals the direction of magnetic north. The subject completed a set of behavioral tests before and after an extended training period. The tests were complemented by questionnaires and interviews. This newly supplied information improved performance on different time scales. In a pointing task we demonstrate an instant improvement of performance based on the signal provided by the device. Furthermore, the signal was helpful in relevant daily tasks, often complicated for the blind, such as keeping a direction over longer distances or taking shortcuts in familiar environments. A homing task with an additional attentional load demonstrated a significant improvement after training. The subject found the directional information highly expedient for the adjustment of his inner maps of familiar environments and described an increase in his feeling of security when exploring unfamiliar environments with the belt. The results give evidence for a firm integration of the newly supplied signals into the behavior of this late blind subject, with better navigational performance and more courageous behavior in unfamiliar environments. Most importantly, the complementary information provided by the belt led to a positive emotional impact with an enhanced feeling of security. This experimental approach demonstrates the potential of sensory augmentation devices for the help of

  8. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required to obtain information that relates to crystallographic-surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides means of atomic resolution where the tip participates actively in the process of imaging. Features observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted, so as to resolve more details of the crystalline morphology (see figure). Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current. Two metallic surfaces influence ions trapped

  9. The sensory construction of dreams and nightmare frequency in congenitally blind and late blind individuals

    DEFF Research Database (Denmark)

    Meaidi, Amani; Jennum, Poul; Ptito, Maurice

    2014-01-01

    OBJECTIVES: We aimed to assess dream content in groups of congenitally blind (CB), late blind (LB), and age- and sex-matched sighted control (SC) participants. METHODS: We conducted an observational study of 11 CB, 14 LB, and 25 SC participants and collected dream reports over a 4-week period. We also assessed participants' ability of visual imagery during waking cognition, sleep quality, and depression and anxiety levels. RESULTS: All blind participants had fewer visual dream impressions compared to SC participants. In LB participants, duration of blindness was negatively correlated with duration, clarity, and color content of visual dream impressions. CB participants reported more auditory, tactile, gustatory, and olfactory dream components compared to SC participants. In contrast, LB participants only reported more tactile dream impressions. Blind and SC participants did not differ with respect to emotional and thematic dream content. However, CB participants reported more aggressive interactions

  10. Blind Cat

    Directory of Open Access Journals (Sweden)

    Arka Chattopadhyay

    2015-08-01

    There’s no way to know whether he was blind from birth or blindness was something he had picked up from his fights with other cats. He wasn’t an urban cat. He lived in a little village, soaked in the smell of fish with a river running right beside it. Cats like these have stories of a different kind. The two-storied hotel where he lived had a wooden floor. It stood right on the riverbank and had more than a tilt towards the river, as if deliberately leaning on the water.

  11. The blind hens’ challenge

    DEFF Research Database (Denmark)

    Sandøe, Peter; Hocking, Paul M.; Forkman, Björn

    2014-01-01

    Animal ethicists have recently debated the ethical questions raised by disenhancing animals to improve their welfare. Here, we focus on the particular case of breeding blind hens for commercial egg-laying systems, in order to benefit their welfare. Many people find breeding blind hens intuitively ... about breeding blind hens. But we also argue that alternative views, which (for example) claim that it is important to respect the telos or rights of an animal, do not offer a more convincing solution to questions raised by the possibility of disenhancing animals for their own benefit.

  12. Reaching for the Stars: A New NASA-National Federation of the Blind Initiative

    Science.gov (United States)

    Maynard, N. G.; Riccobono, M. A.

    2004-12-01

    The National Aeronautics and Space Administration (NASA) and the National Federation of the Blind (NFB) recently launched a unique new partnership which will inspire and empower blind youth to consider opportunities in science, technology, engineering, and math related careers from which they have typically been excluded. This partnership presents a framework for successful cultivation of the next generation of scientists. By partnering with the NFB Jernigan Institute, a one-of-a-kind research and training facility developed and directed by blind people, NASA has engaged the most powerful tool for tapping the potential of blind youth. By teaming NASA scientists and engineers with successful blind adults within a national organization, the NFB, this partnership has established an unparalleled pipeline of talent and imagination. The NASA/NFB partnership seeks to facilitate the means that will lead to increased science and technology employment opportunities for the blind, particularly within NASA. The initiative is facilitating the development of education programs and products which will stimulate better educational opportunities and supports for blind youth in the STEM areas, better preparing them to enter the NASA employment path. In addition, the partnership brings the unique perspective of the blind to the continuing effort to develop improved space technologies, which may be applied to navigation and wayfinding, technologies for education and outreach, and technologies for improving access to information using nonvisual techniques. This presentation describes some of the activities accomplished in the first year of the partnership. Examples include the establishment of the first NFB Science Academy for Blind Youth, which included two summer science camps supported by NASA. During the first camp session, twelve middle-school-age blind youth explored earth science concepts such as identification and characterization of soils, weather parameters, plants

  13. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Full Text Available Abstract Background: For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results: In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion: The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as regression fit to probe sequence models. We demonstrate its efficacy by extensive simulations as well as on various biological data.
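    The paper's rank condition is easy to state concretely: with intensities modeled as G @ x for a probe-by-variant matrix G, the abundances x are unique (up to noise) exactly when G has full column rank. A tiny hedged example, with made-up matrices:

```python
import numpy as np

def is_identifiable(G):
    """Abundances in intensities = G @ x are uniquely determined iff the
    probe-by-variant design matrix G has full column rank."""
    return np.linalg.matrix_rank(G) == G.shape[1]

# Two variants covered by identical probe sets are ambiguous (rank 1 < 2);
# one variant-specific probe resolves the degeneracy.
G_ambiguous = np.array([[1, 1], [1, 1], [1, 1]])
G_resolved = np.array([[1, 1], [1, 1], [1, 0]])
print(is_identifiable(G_ambiguous), is_identifiable(G_resolved))
```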

  14. On imitation among young and blind children

    Directory of Open Access Journals (Sweden)

    Maria Rita Campello Rodrigues

    2016-07-01

    Full Text Available This article investigates imitation among young blind children. The survey was conducted as a mosaic assembled over time, with field observations drawn from two settings: professional experience with early stimulation of blind babies and a workshop with blind and low-vision youths aged 13-18 years. By affirming the situated character of knowledge, the research indicates that imitation among young blind people can be one of the ways of creating a common world between young blind and sighted people. Imitation among young blind people is a multi-sensory process that requires bodily experience, involving both blind people and people who see. The paper concludes by indicating the singular character of imitation and, at the same time, affirming its relevance to the development and inclusion of both the blind child and the blind youth.

  15. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Apostolos Meliones

    2018-01-01

    Full Text Available Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils), which has primarily addressed blind or visually impaired (BVI) accessibility and self-guided tours in museums. A pilot prototype has been developed and is currently under evaluation at the Tactual Museum with the collaboration of the Lighthouse for the Blind of Greece. This paper describes the functionality of the application and evaluates candidate indoor location determination technologies, such as wireless local area network (WLAN) and surface-mounted assistive tactile route indications combined with Bluetooth low energy (BLE) beacons and inertial dead-reckoning functionality, to come up with a reliable and highly accurate indoor positioning system adopting the latter solution. The developed concepts, including map matching, a key concept for indoor navigation, apply in a similar way to other indoor guidance use cases involving complex indoor places, such as hospitals, shopping malls, airports, train stations, public and municipality buildings, office buildings, university buildings, hotel resorts, passenger ships, etc. The presented Android application is effectively a Blind IndoorGuide system for accurate and reliable blind indoor navigation.

  16. Blind Estimation of the Phase and Carrier Frequency Offsets for LDPC-Coded Systems

    Directory of Open Access Journals (Sweden)

    Houcke Sebastien

    2010-01-01

    Full Text Available Abstract We consider in this paper the problem of phase offset and Carrier Frequency Offset (CFO) estimation for Low-Density Parity-Check (LDPC) coded systems. We propose new blind estimation techniques based on the calculation and minimization of functions of the Log-Likelihood Ratios (LLR) of the syndrome elements obtained according to the parity check matrix of the error-correcting code. In the first part of this paper, we consider phase offset estimation for a Binary Phase Shift Keying (BPSK) modulation and propose a novel estimation technique. Simulation results show that the proposed method is very effective and outperforms many existing algorithms. Then, we modify the estimation criterion so that it can work for higher-order modulations. One interesting feature of the proposed algorithm when applied to high-order modulations is that the phase offset of the channel can be blindly estimated without any ambiguity. In the second part of the paper, we consider the problem of CFO estimation and propose estimation techniques that are based on the same concept as the ones presented for the phase offset estimation. The Mean Squared Error (MSE) and Bit Error Rate (BER) curves show the efficiency of the proposed estimation techniques.
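    One way to picture the syndrome-LLR criterion for BPSK (a hedged sketch, not the paper's exact estimator): derotate by a candidate phase, form bit LLRs, combine them per parity check with the tanh rule, and keep the phase that makes the checks most consistent. The parity-check matrix H and noise variance are assumed known; for BPSK a pi ambiguity remains, which is why the grid covers only half a cycle.

```python
import numpy as np

def syndrome_llr_sum(y, H, sigma2):
    """Sum of syndrome LLRs: large when the parity checks are consistent."""
    llr = 2.0 * np.real(y) / sigma2         # BPSK bit LLRs
    t = np.tanh(llr / 2.0)
    total = 0.0
    for row in H:                           # one parity check per row of H
        prod = np.prod(t[row.astype(bool)])
        total += 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return total

def estimate_phase(y, H, sigma2, n_grid=180):
    """Grid-search the phase offset maximizing syndrome consistency."""
    phases = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    scores = [syndrome_llr_sum(y * np.exp(-1j * p), H, sigma2) for p in phases]
    return phases[int(np.argmax(scores))]
```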

  17. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly smaller than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation has occurred in the former.

  18. A novel deconvolution method for modeling UDP-N-acetyl-D-glucosamine biosynthetic pathways based on 13C mass isotopologue profiles under non-steady-state conditions

    Directory of Open Access Journals (Sweden)

    Belshoff Alex C

    2011-05-01

    Full Text Available Abstract Background: Stable isotope tracing is a powerful technique for following the fate of individual atoms through metabolic pathways. Measuring isotopic enrichment in metabolites provides quantitative insights into the biosynthetic network and enables flux analysis as a function of external perturbations. NMR and mass spectrometry are the techniques of choice for global profiling of stable isotope labeling patterns in cellular metabolites. However, meaningful biochemical interpretation of the labeling data requires both quantitative analysis and complex modeling. Here, we demonstrate a novel approach that involved acquiring and modeling the timecourses of 13C isotopologue data for UDP-N-acetyl-D-glucosamine (UDP-GlcNAc) synthesized from [U-13C]-glucose in human prostate cancer LnCaP-LN3 cells. UDP-GlcNAc is an activated building block for protein glycosylation, which is an important regulatory mechanism in the development of many prominent human diseases including cancer and diabetes. Results: We utilized a stable isotope resolved metabolomics (SIRM) approach to determine the timecourse of 13C incorporation from [U-13C]-glucose into UDP-GlcNAc in LnCaP-LN3 cells. 13C positional isotopomers and isotopologues of UDP-GlcNAc were determined by high resolution NMR and Fourier transform-ion cyclotron resonance-mass spectrometry. A novel simulated annealing/genetic algorithm, called 'Genetic Algorithm for Isotopologues in Metabolic Systems' (GAIMS), was developed to find the optimal solutions to a set of simultaneous equations that represent the isotopologue compositions, which are mixtures of isotopomer species. The best model was selected based on information theory. The output comprises the timecourse of the individual labeled species, which was deconvoluted into labeled metabolic units, namely glucose, ribose, acetyl and uracil. The performance of the algorithm was demonstrated by validating the computed fractional 13C enrichment in these subunits
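    A toy version of the deconvolution idea (far simpler than GAIMS): the isotopologue distribution of a molecule assembled from subunits is the convolution of the subunits' distributions, so a common 13C enrichment can be fitted by least squares. The binomial labeling model and the two subunit sizes (6 carbons for glucose, 5 for ribose) are simplifying assumptions.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

def subunit_dist(n_carbons, p):
    """Binomial 13C isotopologue distribution for one subunit."""
    return binom.pmf(np.arange(n_carbons + 1), n_carbons, p)

def fit_enrichment(observed, subunit_sizes=(6, 5)):
    """Fit one fractional enrichment p so that the convolution of the subunit
    distributions matches the observed isotopologue profile."""
    def loss(p):
        model = subunit_dist(subunit_sizes[0], p)
        for n in subunit_sizes[1:]:
            model = np.convolve(model, subunit_dist(n, p))
        return float(np.sum((model[: len(observed)] - observed) ** 2))
    return minimize_scalar(loss, bounds=(0.0, 1.0), method="bounded").x
```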

  19. Privacy Preserving Similarity Based Text Retrieval through Blind Storage

    Directory of Open Access Journals (Sweden)

    Pinki Kumari

    2016-09-01

    Full Text Available Cloud computing is advancing rapidly due to its many advantages, and more data owners are interested in outsourcing their data to cloud storage to centralize it. As huge numbers of files are stored in the cloud, a keyword-based search process must be provided to data users. At the same time, to protect data privacy, sensitive data are encrypted before being outsourced to the cloud server, which makes searching over encrypted data difficult. In this system we propose similarity-based text retrieval from blind storage blocks in encrypted format. This system provides stronger security because of the blind storage system, in which data are stored at random locations in cloud storage. In the existing approach, the data owner cannot encrypt the document data, as encryption is done only at the server end, and everyone can access the data because no private-key concept is applied to maintain data privacy. In our proposed system, the data owner encrypts the data himself using the RSA algorithm. RSA is a public-key cryptosystem widely used for sensitive data storage over the Internet. In our system we use a text mining process to identify the index files of user documents. Before encryption we also use NLP (natural language processing) techniques to identify keyword synonyms in the data owner's documents. The text mining process examines the text word by word and collects the literal meaning beyond the word groups that compose each sentence. Those words are looked up in the WordNet API so that equivalent words can be identified for use in the index file. Our proposed system provides a more secure and authorized way of retrieving text from cloud storage with access control. Finally, our experimental results show that our system outperforms the existing one.

  20. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
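    The Poisson maximum-likelihood core of such a restoration is the multi-frame Richardson-Lucy update, sketched below under the assumption that each frame's PSF is already known; the paper's frame selection, regularization and joint (blind) PSF estimation are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiframe_richardson_lucy(frames, psfs, n_iter=30, eps=1e-12):
    """Poisson ML restoration from several frames with per-frame PSFs via
    averaged multiplicative Richardson-Lucy updates."""
    x = np.full(frames[0].shape, float(frames[0].mean()))
    for _ in range(n_iter):
        update = np.zeros_like(x)
        for y, h in zip(frames, psfs):
            blurred = fftconvolve(x, h, mode="same") + eps
            update += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
        x *= update / len(frames)
    return x
```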

  1. Psychological and social adjustment to blindness: Understanding ...

    African Journals Online (AJOL)

    Psychological and social adjustment to blindness: Understanding from two groups of blind people in Ilorin, Nigeria. ... Background: Blindness can cause psychosocial distress leading to maladjustment if not mitigated. Maladjustment is a secondary burden that further reduces the quality of life of the blind. Adjustment is often ...

  2. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identification while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
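    A library search over deconvolved spectra is often scored with a cosine (dot-product) match; the sketch below is a generic stand-in for that step, not the published ULSA, and its tolerance and data layout are assumptions.

```python
import numpy as np

def cosine_score(query, reference, tol=0.01):
    """Cosine similarity between two centroided spectra, each an array of
    (m/z, intensity) rows, matching peaks within an m/z tolerance."""
    q = np.zeros(len(query))
    r = np.zeros(len(query))
    for i, (mz, inten) in enumerate(query):
        q[i] = inten
        hits = reference[np.abs(reference[:, 0] - mz) < tol]
        r[i] = hits[:, 1].max() if len(hits) else 0.0
    denom = np.linalg.norm(q) * np.linalg.norm(r)
    return float(q @ r / denom) if denom else 0.0

def rank_library(query, library):
    """Sort {name: spectrum} library entries by score against the query."""
    return sorted(library.items(), key=lambda kv: -cosine_score(query, kv[1]))
```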

  3. The sensory construction of dreams and nightmare frequency in congenitally blind and late blind individuals.

    Science.gov (United States)

    Meaidi, Amani; Jennum, Poul; Ptito, Maurice; Kupers, Ron

    2014-05-01

    We aimed to assess dream content in groups of congenitally blind (CB), late blind (LB), and age- and sex-matched sighted control (SC) participants. We conducted an observational study of 11 CB, 14 LB, and 25 SC participants and collected dream reports over a 4-week period. Every morning participants filled in a questionnaire related to the sensory construction of the dream, its emotional and thematic content, and the possible occurrence of nightmares. We also assessed participants' ability of visual imagery during waking cognition, sleep quality, and depression and anxiety levels. All blind participants had fewer visual dream impressions compared to SC participants. In LB participants, duration of blindness was negatively correlated with duration, clarity, and color content of visual dream impressions. CB participants reported more auditory, tactile, gustatory, and olfactory dream components compared to SC participants. In contrast, LB participants only reported more tactile dream impressions. Blind and SC participants did not differ with respect to emotional and thematic dream content. However, CB participants reported more aggressive interactions and more nightmares compared to the other two groups. Our data show that blindness considerably alters the sensory composition of dreams and that onset and duration of blindness plays an important role. The increased occurrence of nightmares in CB participants may be related to a higher number of threatening experiences in daily life in this group.

  4. 42 CFR 435.530 - Definition of blindness.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health. Part 435, Eligibility in the States, District of Columbia, the Northern Mariana Islands, and American Samoa; Categorical Requirements for Eligibility: Blindness. § 435.530 Definition of blindness. (a) Definition. The agency must use the same definition of blindness as used under SSI, except...

  5. "VisionTouch Phone" for the Blind.

    Science.gov (United States)

    Yong, Robest

    2013-10-01

    Our objective is to enable the blind to use smartphones with touchscreens to make calls and send text messages (SMS) with ease, speed, and accuracy. We believe that with our proposed platform, which enables the blind to locate the positions of the keypads, new games as well as education and safety applications will increasingly be developed for the blind. This innovative idea can also be implemented on tablets for the blind, allowing them to use information websites such as Wikipedia and newspaper portals.

  6. Motor development of blind toddler

    OpenAIRE

    Likar, Petra

    2013-01-01

    For blind toddlers, the development of motor skills opens possibilities for learning and exploring the environment. The purpose of this graduation thesis is to systematically mark the milestones in the development of motor skills in blind toddlers, to establish the different factors which affect this development, and to discover ways in which teachers of the visually impaired and parents can encourage the development of motor skills. It is typical of blind toddlers that they do not experience a wide varie...

  7. Causes of Severe Visual Impairment and Blindness: Comparative Data From Bhutanese and Laotian Schools for the Blind.

    Science.gov (United States)

    Farmer, Lachlan David Mailey; Ng, Soo Khai; Rudkin, Adam; Craig, Jamie; Wangmo, Dechen; Tsang, Hughie; Southisombath, Khamphoua; Griffiths, Andrew; Muecke, James

    2015-01-01

    To determine and compare the major causes of childhood blindness and severe visual impairment in Bhutan and Laos. Independent cross-sectional surveys. This survey consists of 2 cross-sectional observational studies. The Bhutanese component was undertaken at the National Institute for Vision Impairment, the only dedicated school for the blind in Bhutan. The Laotian study was conducted at the National Ophthalmology Centre and Vientiane School for the Blind. Children younger than age 16 were invited to participate. A detailed history and examination were performed consistent with the World Health Organization Prevention of Blindness Eye Examination Record. Of the 53 children examined in both studies, 30 were from Bhutan and 23 were from Laos. Forty percent of Bhutanese and 87.1% of Laotian children assessed were blind, with 26.7% and 4.3%, respectively, being severely visually impaired. Congenital causes of blindness were the most common, representing 45% and 43.5% of the Bhutanese and Laotian children, respectively. Anatomically, the primary site of blinding pathology differed between the cohorts. In Bhutan, the lens comprised 25%, with whole globe at 20% and retina at 15%, but in Laos, whole globe and cornea equally contributed at 30.4%, followed by retina at 17.4%. There was an observable difference in the rates of blindness/severe visual impairment due to measles, with no cases observed in the Bhutanese children but 20.7% of the total pathologies in the Laotian children attributable to congenital measles infection. Consistent with other studies, there is a high rate of blinding disease, which may be prevented, treated, or ameliorated.

  8. 20 CFR 416.983 - How we evaluate statutory blindness.

    Science.gov (United States)

    2010-04-01

    Title 20, Employees' Benefits. Part 416, Supplemental Security Income for the Aged, Blind, and Disabled; Determining Disability and Blindness: Blindness. § 416.983 How we evaluate statutory blindness. We will find that you are blind if you are statutorily blind within the meaning of...

  9. Causes of severe visual impairment and blindness in students in schools for the blind in Northwest Ethiopia.

    Science.gov (United States)

    Asferaw, Mulusew; Woodruff, Geoffrey; Gilbert, Clare

    2017-01-01

    To determine the causes of severe visual impairment and blindness (SVI/BL) among students in schools for the blind in Northwest Ethiopia and to identify preventable and treatable causes. Students attending nine schools for the blind in Northwest Ethiopia were examined in May and June 2015 and causes were assigned using the standard WHO record form for children with blindness and low vision. 383 students were examined, 357 (93%) of whom were severely visually impaired or blind; of the students aged under 16 years, 100 were blind and four were SVI (total 104). The major anatomical site of visual loss among those 0-15 years was cornea/phthisis (47.1%), usually due to measles and vitamin A deficiency, followed by whole globe (22.1%), lens (9.6%) and uvea (8.7%). Among students aged 16 years and above, cornea/phthisis (76.3%) was the major anatomical cause, followed by lens (6.3%), whole globe (4.7%), uvea (3.6%) and optic nerve (3.2%). Corneal blindness, mainly as the result of measles and vitamin A deficiency, is still a public health problem in Northwest Ethiopia, and this has not changed as observed in other low-income countries. More than three-fourths of the causes of SVI/BL in students in schools for the blind are potentially avoidable, with measles/vitamin A deficiency and cataract being the leading causes.

  10. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF data. ... A filter is used with a second time-reversed recursive estimation step: here it is necessary to perform about 70 arithmetic operations per RF sample, or about 1 billion operations per second, for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature of the algorithms. The system is interfaced to our previously-developed real-time sampling system, which can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage...
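
    A quick back-of-envelope check of the stated load, assuming the quoted 70 operations per RF sample at the 20 MHz acquisition rate:

        # Required throughput for real-time deconvolution at the stated rates.
        ops_per_sample = 70
        sample_rate_hz = 20e6
        print(f"{ops_per_sample * sample_rate_hz / 1e9:.1f} GFLOP/s")  # 1.4
        # Same order of magnitude as the paper's "about 1 billion operations
        # per second", comparable to the system's 1.2 GFLOP/s capability.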

  11. The deconvolution of sputter-etching surface concentration measurements to determine impurity depth profiles

    International Nuclear Information System (INIS)

    Carter, G.; Katardjiev, I.V.; Nobes, M.J.

    1989-01-01

    The quasi-linear partial differential continuity equations that describe the evolution of the depth profiles and surface concentrations of marker atoms in kinematically equivalent systems undergoing sputtering, ion collection and atomic mixing are solved using the method of characteristics. It is shown how atomic mixing probabilities can be deduced from measurements of ion collection depth profiles with increasing ion fluence, and how this information can be used to predict surface concentration evolution. Even with this information, however, it is shown that it is not possible to deconvolute directly the surface concentration measurements to provide initial depth profiles, except when only ion collection and sputtering from the surface layer alone occur. It is demonstrated further that optimal recovery of initial concentration depth profiles could be ensured if the concentration-measuring analytical probe preferentially sampled depths near and at the maximum depth of bombardment-induced perturbations. (author)

  12. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
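
    The linear-algebra idea can be sketched as a nonnegative least-squares fit of a mixture spectrum against a matrix of library spectra. This is an assumed minimal formulation for illustration, not Specter's actual code path.

        # Model a DIA mixture spectrum m as m ~= L @ c with coefficients c >= 0,
        # where each column of L is a binned library spectrum for one peptide.
        import numpy as np
        from scipy.optimize import nnls

        def deconvolute_mixture(mixture, library_matrix):
            """library_matrix: (n_mz_bins, n_peptides). Returns (c, residual)."""
            coeffs, residual = nnls(library_matrix, mixture)
            return coeffs, residual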

  13. 20 CFR 416.982 - Blindness under a State plan.

    Science.gov (United States)

    2010-04-01

    Title 20, Employees' Benefits. Part 416, Supplemental Security Income for the Aged, Blind, and Disabled; Determining Disability and Blindness: Blindness. § 416.982 Blindness under a State plan. ... plan because of your blindness for the month of December 1973; and (c) you continue to be blind as...

  14. Blindness and Insight in King Lear

    Institute of Scientific and Technical Information of China (English)

    岳元玉

    2008-01-01

    This paper intends to explore how William Shakespeare illustrates the theme of blindness and insight in his great tragedy "King Lear". Four characters' deeds and their fate are used as a case study to examine what blindness is, what insight is, and the relationship between the two. The writer finds that by depicting the characters' deeds and their fate in a double plot, Shakespeare renders the folly of blindness, the transition from blindness to insight, and the use of reason and thought to understand the truth.

  15. 42 CFR 436.530 - Definition of blindness.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health. Part 436; Requirements for Medicaid Eligibility: Blindness. § 436.530 Definition of blindness. (a) Definition. The agency must use the definition of blindness that is used in the State plan for AB or AABD. (b) State plan...

  16. Preserved sleep microstructure in blind individuals

    DEFF Research Database (Denmark)

    Aubin, Sébrina; Christensen, Julie A.E.; Jennum, Poul

    2018-01-01

    Light is the primary zeitgeber of the master biological clock found in the suprachiasmatic nucleus of the hypothalamus. In addition, a greater number of sleep disturbances is often reported in blind individuals. Here, we examined various electroencephalographic microstructural components of sleep, both during rapid-eye-movement (REM) sleep and non-REM (NREM) sleep, in blind individuals of both early and late onset and in normally sighted controls. During wakefulness, occipital alpha oscillations were lower or absent in blind individuals. During sleep, differences were observed across electrode derivations between the early and late blind samples, which may reflect altered cortical networking in early blindness. Despite these differences in power spectral density, the electroencephalographic microstructure of sleep, including sleep spindles, slow wave activity, and sawtooth waves, remained preserved.

  17. Blindness and the age of enlightenment: Diderot's letter on the blind.

    Science.gov (United States)

    Margo, Curtis E; Harman, Lynn E; Smith, Don B

    2013-01-01

    Several months after anonymously publishing an essay in 1749 with the title "Letter on the Blind for the Use of Those Who Can See," the chief editor of the French Encyclopédie was arrested and taken to the prison fortress of Vincennes just east of Paris, France. The correctly assumed author, Denis Diderot, was 35 years old and had not yet left his imprint on the Age of Enlightenment. His letter, which recounted the life of Nicolas Saunderson, a blind mathematician, was intended to advance secular empiricism and disparage the religiously tinged rationalism put forward by Rene Descartes. The letter's discussion of sensory perception in men born blind dismissed the supposed primacy of visual imagery in abstract thinking. The essay did little to resolve any philosophical controversy, but it marked a turning point in Western attitudes toward visual disability.

  18. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).
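
    The classification step, pooled features fed to a linear SVM, might look like the sketch below, where a simple spatial-pyramid max pooling stands in for the SPM kernel and the data are synthetic placeholders.

        # Spatial-pyramid max pooling of 2-D feature maps + linear SVM scoring.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.metrics import roc_auc_score

        def pyramid_pool(fmap, levels=(1, 2, 4)):
            """Max-pool a feature map over 1x1, 2x2 and 4x4 grids."""
            h, w = fmap.shape
            feats = []
            for g in levels:
                for i in range(g):
                    for j in range(g):
                        feats.append(fmap[i*h//g:(i+1)*h//g,
                                          j*w//g:(j+1)*w//g].max())
            return np.array(feats)

        rng = np.random.default_rng(0)                 # placeholder data
        maps = rng.random((200, 32, 32))
        labels = rng.integers(0, 2, 200)
        X = np.stack([pyramid_pool(m) for m in maps])
        clf = LinearSVC().fit(X[:150], labels[:150])
        print(roc_auc_score(labels[150:], clf.decision_function(X[150:])))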

  19. LLL/DOR seismic conservatism of operating plants project. Interim report on Task II.1.3: soil-structure interaction. Deconvolution of the June 7, 1975, Ferndale Earthquake at the Humboldt Bay Power Plant

    International Nuclear Information System (INIS)

    Maslenikov, O.R.; Smith, P.D.

    1978-01-01

    The Ferndale Earthquake of June 7, 1975, provided a unique opportunity to study the accuracy of seismic soil-structure interaction methods used in the nuclear industry because, other than this event, there have been no significant earthquakes for which moderate motions of nuclear plants have been recorded. Future studies are planned which will evaluate the soil-structure interaction methodology further, using increasingly complex methods as required. The first step in this task was to perform deconvolution and soil-structure interaction analyses for the effects of the Ferndale earthquake at the Humboldt Bay Power Plant site. The deconvolution analyses of bedrock motions performed are compared, together with additional studies on analytical sensitivity.

  20. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period through updates after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or a choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
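
    The period-initialization idea, estimating the fault period from the autocorrelation of the envelope signal rather than from a user-supplied prior, can be sketched as follows (the peak-picking details are assumptions):

        # Estimate an impulse period from the envelope autocorrelation.
        import numpy as np
        from scipy.signal import hilbert

        def estimate_period(x, min_lag=10):
            env = np.abs(hilbert(x))                  # envelope of the signal
            env = env - env.mean()
            acf = np.correlate(env, env, mode="full")[len(env) - 1:]
            return min_lag + int(np.argmax(acf[min_lag:]))  # period in samples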

  1. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding

    DEFF Research Database (Denmark)

    Hróbjartsson, A; Forfang, E; Haahr, M T

    2007-01-01

    Blinding can reduce bias in randomized clinical trials, but blinding procedures may be unsuccessful. Our aim was to assess how often randomized clinical trials test the success of blinding, the methods involved and how often blinding is reported as being successful....

  2. Blind Decoding of Multiple Description Codes over OFDM Systems via Sequential Monte Carlo

    Directory of Open Access Journals (Sweden)

    Guo Dong

    2005-01-01

    We consider the problem of transmitting a continuous source through an OFDM system. Multiple description scalar quantization (MDSQ) is applied to the source signal, resulting in two correlated source descriptions. The two descriptions are then OFDM modulated and transmitted through two parallel frequency-selective fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. Extrinsic information about the two descriptions is exchanged between the decoders to improve system performance. A blind soft-input soft-output OFDM detector is developed, based on the techniques of importance sampling and resampling. Such a detector is capable of exchanging the so-called extrinsic information with the other component of the above turbo receiver, successively improving the overall receiver performance. Finally, we also treat channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ source decoding.
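
    The resampling step at the core of such sequential Monte Carlo detectors can be illustrated with systematic resampling; the paper's exact scheme may differ.

        # Systematic resampling: map normalized importance weights to indices.
        import numpy as np

        def systematic_resample(weights, rng=None):
            rng = rng or np.random.default_rng()
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            cumulative = np.cumsum(weights / np.sum(weights))
            return np.searchsorted(cumulative, positions)  # surviving particles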

  3. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    Science.gov (United States)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical
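
    For a feel of the simple linear least squares approach to self-calibration mentioned above, the sketch below uses an assumed variant of the model y = DAx in which the reciprocal gains lie in a known subspace, so that recovery reduces to a null-space problem solvable by the SVD. The subspace model and the dimensions are illustrative assumptions, not the dissertation's exact formulation.

        # y = diag(d) A x with unknown gains d and signal x. If 1/d = E g for a
        # known matrix E, then A x - diag(y) E g = 0, which is linear in (x, g).
        import numpy as np

        rng = np.random.default_rng(1)
        m, n, k = 128, 16, 4
        A = rng.standard_normal((m, n))
        E = rng.standard_normal((m, k))
        g_true = rng.standard_normal(k)
        d_true = 1.0 / (E @ g_true)            # gains consistent with the model
        x_true = rng.standard_normal(n)
        y = d_true * (A @ x_true)

        M = np.hstack([A, -np.diag(y) @ E])    # M @ [x; g] = 0
        w = np.linalg.svd(M)[2][-1]            # null vector, up to scale
        x_hat = w[:n]
        scale = (x_true @ x_hat) / (x_hat @ x_hat)    # fix scale via ground truth
        print(np.linalg.norm(scale * x_hat - x_true))  # ~0 in the noiseless case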

  4. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    International Nuclear Information System (INIS)

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix; Voogt, Pim de

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software packages (MsXelerator and Sieve 2.1) were used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually.

  5. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    Energy Technology Data Exchange (ETDEWEB)

    Bade, Richard [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Causanilles, Ana; Emke, Erik [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Voogt, Pim de, E-mail: w.p.devoogt@uva.nl [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam (Netherlands)

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software packages (MsXelerator and Sieve 2.1) were used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually.

  6. Prevalence and causes of low vision and blindness in the blind population supported by the Yazd Welfare Organization

    Directory of Open Access Journals (Sweden)

    F Ezoddini - Ardakani

    2006-01-01

    Introduction: In 1995, the World Health Organization (WHO) estimated that there were 37.1 million blind people worldwide. It has subsequently been reported that 110 million people have severely impaired vision and hence are at great risk of becoming blind. Watkins predicted an annual increase of about two million blind worldwide. This study was designed to investigate the causes of blindness and low vision in the blind population supported by the welfare organization of Yazd, Iran. Methods: This clinical descriptive cross-sectional study was done from January to September 2003. In total, 109 blind patients supported by the welfare organization were included in this study. All data were collected by standard methods using a questionnaire, interview and specific examination. The data included demographic characteristics, clinical states, ophthalmic examination, family history and the available prenatal information. The data were analyzed with SPSS software and the chi-square test. Results: Of the total patients, 73 were male (67%) and 36 were female (33%). The median age was 24.6 years (range one month to 60 years). More than half of the cases (53.2%) could be diagnosed in children less than one year of age. In total, 79 patients (88.1%) were legally blind, of whom 23 (29.1%) had no light perception (NLP). The most common causes of blindness were retinitis pigmentosa (32.1%) followed by ocular dysgenesis (16.5%). Conclusion: Our data showed that more than half of blindness cases occur during the first year of life. The most common cause of blindness was retinitis pigmentosa, followed by ocular dysgenesis, cataract and glaucoma, respectively.

  7. BAYESIAN SEMI-BLIND COMPONENT SEPARATION FOR FOREGROUND REMOVAL IN INTERFEROMETRIC 21 cm OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Le; Timbie, Peter T. [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Bunn, Emory F. [Physics Department, University of Richmond, Richmond, VA 23173 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, P. M. [Center for Cosmology and Astro-Particle Physics, Ohio State University, Columbus, OH 43210 (United States); Wandelt, Benjamin D., E-mail: lzhang263@wisc.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 W Green Street, Urbana, IL 61801 (United States)

    2016-01-15

    In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
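
    The Principal Component Analysis baseline used for comparison can be sketched in a few lines: smooth-spectrum foregrounds dominate the leading frequency-frequency eigenmodes, which are projected out. This is an illustrative textbook version, not the paper's pipeline.

        # Remove the dominant spectral eigenmodes (assumed to be foregrounds).
        import numpy as np

        def pca_clean(cube, n_modes=3):
            """cube: (n_freq, n_pix) data; returns foreground-subtracted cube."""
            centered = cube - cube.mean(axis=1, keepdims=True)
            cov = centered @ centered.T / cube.shape[1]
            _, vecs = np.linalg.eigh(cov)
            fg = vecs[:, -n_modes:]                   # largest-eigenvalue modes
            return centered - fg @ (fg.T @ centered)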

  8. A joint Richardson–Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson–Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package. (paper)
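
    For reference, the classical Richardson–Lucy update that the joint reconstruction builds on is shown below for a single view with a known PSF; the joint multifocal version sums such corrections over all illumination foci.

        # Textbook Richardson-Lucy deconvolution with a known PSF.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
            image = np.asarray(image, dtype=float)
            estimate = np.full_like(image, image.mean())
            psf_flipped = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                estimate *= fftconvolve(image / (blurred + eps),
                                        psf_flipped, mode="same")
            return estimate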

  9. Keratoprostheses for corneal blindness: a review of contemporary devices

    Science.gov (United States)

    Avadhanam, Venkata S; Smith, Helen E; Liu, Christopher

    2015-01-01

    According to the World Health Organization, 4.9 million people globally are blind due to corneal pathology. Corneal transplantation is successful and curative of the blindness for a majority of these cases. However, it is less successful in a number of diseases that produce corneal neovascularization, a dry ocular surface, and recurrent inflammation or infections. A keratoprosthesis, or KPro, is the only alternative for restoring vision when a corneal graft is doomed to failure. Although a number of KPros have been proposed, only two devices, the Boston type-1 KPro and the osteo-odonto-KPro, have come to the fore. The former is totally synthetic and the latter is semi-biological in constitution. These two KPros have different surgical techniques and indications. Keratoprosthetic surgery is complex and should only be undertaken in specialized centers, where expertise, multidisciplinary teams, and resources are available. In this article, we briefly discuss some of the prominent historical KPros and contemporary devices. PMID:25945031

  10. A survey of severe visual impairment and blindness in children attending thirteen schools for the blind in Sri Lanka.

    Science.gov (United States)

    Gao, Zoe; Muecke, James; Edussuriya, Kapila; Dayawansa, Ranasiri; Hammerton, Michael; Kong, Aimee; Sennanayake, Saman; Senaratne, Tissa; Marasinghe, Nirosha; Selva, Dinesh

    2011-02-01

    To identify the causes of blindness and severe visual impairment (BL/SVI) in children attending schools for the blind in Sri Lanka, and to provide optical devices and ophthalmic treatment where indicated. Two hundred and six children under 16 years from 13 schools for the blind in Sri Lanka were examined by a team of ophthalmologists and optometrists. Data were entered in the World Health Organization Prevention of Blindness Eye Examination Record for Childhood Blindness (WHO/PBL ERCB). Of the 206 children, 83.5% were blind (BL = visual acuity [VA] < 3/60). ... of the children in schools for the blind in Sri Lanka had potentially avoidable causes of BL/SVI. Vision could also be improved in a third of the children. The data support the need to develop specialized pediatric ophthalmic services, particularly in the face of advancing neonatal life support in Sri Lanka, and the need for increased provision of optical support.

  11. Blind I/Q Signal Separation-Based Solutions for Receiver Signal Processing

    Directory of Open Access Journals (Sweden)

    Visa Koivunen

    2005-09-01

    This paper introduces some novel digital signal processing (DSP)-based approaches to some of the most fundamental tasks of radio receivers, namely channel equalization, carrier synchronization, and I/Q mismatch compensation. The leading principle is to show that all these problems can be solved blindly (i.e., without training signals) by forcing the I and Q components of the observed data to be as independent as possible. Blind signal separation (BSS) is then introduced as an efficient tool to carry out these tasks, and simulation examples are used to illustrate the performance of the proposed approaches. The main application area of the presented carrier synchronization and I/Q mismatch compensation techniques is in direct-conversion receivers, while the proposed channel equalization principles apply to basically any radio architecture.

  12. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence on both axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to their cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution.
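
    The exponential point-source model can be illustrated by fitting a measured scatter tail with scipy; the functional form and the numbers here are assumptions for demonstration only.

        # Fit a scatter point-response tail to a decaying exponential.
        import numpy as np
        from scipy.optimize import curve_fit

        def scatter_tail(r, amplitude, slope):
            return amplitude * np.exp(-slope * np.abs(r))

        r = np.linspace(-20, 20, 201)                 # cm from the point source
        rng = np.random.default_rng(2)
        measured = scatter_tail(r, 0.05, 0.3) + 1e-4 * rng.standard_normal(r.size)
        params, _ = curve_fit(scatter_tail, r, measured, p0=(0.1, 0.1))
        print(params)                                 # recovered (amplitude, slope)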

  13. Adapting Advanced Inorganic Chemistry Lecture and Laboratory Instruction for a Legally Blind Student

    Science.gov (United States)

    Miecznikowski, John R.; Guberman-Pfeffer, Matthew J.; Butrick, Elizabeth E.; Colangelo, Julie A.; Donaruma, Cristine E.

    2015-01-01

    In this article, the strategies and techniques used to successfully teach advanced inorganic chemistry, in the lecture and laboratory, to a legally blind student are described. At Fairfield University, these separate courses, which have a physical chemistry corequisite or a prerequisite, are taught for junior and senior chemistry and biochemistry…

  14. Prevalence and causes of corneal blindness.

    Science.gov (United States)

    Wang, Haijing; Zhang, Yaoguang; Li, Zhijian; Wang, Tiebin; Liu, Ping

    2014-04-01

    The study aimed to assess the prevalence and causes of corneal blindness in a rural northern Chinese population. Cross-sectional study. The cluster random sampling method was used to select the sample. This population-based study included 11 787 participants of all ages in rural Heilongjiang Province, China. These participants underwent a detailed interview and eye examination that included the measurement of visual acuity, slit-lamp biomicroscopy and direct ophthalmoscopy. An eye was considered to have corneal blindness if the visual acuity met the WHO definitions of blindness or low vision owing to corneal disease. Among the 10 384 people enrolled in the study, the prevalence of corneal blindness was 0.3% (95% confidence interval 0.2-0.4%). The leading cause was keratitis in childhood (40.0%), followed by ocular trauma (33.3%) and keratitis in adulthood (20.0%). Age and illiteracy were found to be associated with an increased prevalence of corneal blindness. Blindness because of corneal diseases in rural areas of Northern China is a significant public health problem that needs to be given more attention.

  15. Memory for environmental sounds in sighted, congenitally blind and late blind adults: evidence for cross-modal compensation.

    Science.gov (United States)

    Röder, Brigitte; Rösler, Frank

    2003-10-01

    Several recent reports suggest compensatory performance changes in blind individuals. It has, however, been argued that the lack of visual input leads to impoverished semantic networks resulting in the use of data-driven rather than conceptual encoding strategies on memory tasks. To test this hypothesis, congenitally blind and sighted participants encoded environmental sounds either physically or semantically. In the recognition phase, both conceptually as well as physically distinct and physically distinct but conceptually highly related lures were intermixed with the environmental sounds encountered during study. Participants indicated whether or not they had heard a sound in the study phase. Congenitally blind adults showed elevated memory both after physical and semantic encoding. After physical encoding blind participants had lower false memory rates than sighted participants, whereas the false memory rates of sighted and blind participants did not differ after semantic encoding. In order to address the question if compensatory changes in memory skills are restricted to critical periods during early childhood, late blind adults were tested with the same paradigm. When matched for age, they showed similarly high memory scores as the congenitally blind. These results demonstrate compensatory performance changes in long-term memory functions due to the loss of a sensory system and provide evidence for high adaptive capabilities of the human cognitive system.

  16. Rapid assessment of avoidable blindness and diabetic retinopathy in Republic of Moldova.

    Science.gov (United States)

    Zatic, Tatiana; Bendelic, Eugen; Paduca, Ala; Rabiu, Mansour; Corduneanu, Angela; Garaba, Angela; Novac, Victoria; Curca, Cristina; Sorbala, Inga; Chiaburu, Andrei; Verega, Florentina; Andronic, Victoria; Guzun, Irina; Căpăţină, Olga; Zamă-Mardari, Iulea

    2015-06-01

    To evaluate the prevalence and causes of blindness and visual impairment, and the prevalence of diabetes mellitus and diabetic retinopathy, among people aged ≥50 years in the Republic of Moldova using Rapid Assessment of Avoidable Blindness plus Diabetic Retinopathy ('RAAB+DR') techniques. 111 communities of people aged ≥50 years were randomly selected. In addition to the standard RAAB procedures, in all people with diabetes (previous history of the disease or a random blood glucose level >11.1 mmol/L (200 mg/dL)) a dilated fundus examination was performed to assess the presence and degree of diabetic retinopathy using the Scottish DR grading system. 3877 (98%) of the 3885 eligible people were examined. The prevalence of blindness was 1.4% (95% CI 1.0% to 1.8%). The major causes of blindness and severe visual impairment were untreated cataract (58.2%), glaucoma (10.9%), and other posterior segment causes (10.9%). The estimated prevalence of diabetes was 11.4%. Among all people with diabetes, 55.9% had some form of retinopathy, and sight-threatening diabetic retinopathy affected 14.6%. The RAAB+DR survey in the Republic of Moldova established that untreated cataract is the major cause of avoidable blindness in rural areas. This needs to be tackled by expanding the geographical coverage of cataract surgical services.

  17. Causes of blindness in a special education school.

    Science.gov (United States)

    Onakpoya, O H; Adegbehingbe, B O; Omotoye, O J; Adeoye, A O

    2011-01-01

    Blind children and young adults have to overcome a lifetime of emotional, social and economic difficulties. They employ non-vision-dependent methods for education. To assess the causes of blindness in a special school in southwestern Nigeria to aid the development of efficient blindness prevention programmes. A cross-sectional survey of the Ekiti State Special Education School, Nigeria was conducted in May-June 2008 after approval from the Ministry of Education. All students in the blind section underwent visual acuity testing, pen-torch eye examination and dilated fundoscopy, in addition to taking biodata and history. Thirty blind students with a mean age of 18±7.3 years and a male:female ratio of 1.7:1 were examined. Blindness resulted commonly from cataract, eight (26.7%); glaucoma, six (20%); retinitis pigmentosa, four (16.7%); and post-traumatic phthisis bulbi, two (6.7%). Blindness was avoidable in 18 (61%) of cases. Glaucoma blindness was associated with redness, pain, lacrimation and photophobia in 15 (50%) and hyphaema in 16.7% of students; none of these students were on any medication at the time of study. The causes of blindness in this rehabilitation school for the blind are largely avoidable, and glaucoma-blind pupils face additional painful eye-related morbidity during rehabilitation. While preventive measures and early intervention are needed against childhood cataract and glaucoma, regular ophthalmic consultations and medications are needed especially for glaucoma-blind pupils.

  18. Tactile Sensitivity and Braille Reading in People with Early Blindness and Late Blindness

    Science.gov (United States)

    Oshima, Kensuke; Arai, Tetsuya; Ichihara, Shigeru; Nakano, Yasushi

    2014-01-01

    Introduction: The inability to read quickly can be a disadvantage throughout life. This study focused on the associations of braille reading fluency and individual factors, such as the age at onset of blindness and number of years reading braille, and the tactile sensitivity of people with early and late blindness. The relationship between reading…

  19. Blindness and cataract in children in developing countries

    Directory of Open Access Journals (Sweden)

    Parikshit Gogate

    2009-03-01

    Blindness in children is considered a priority area for VISION 2020, as visually impaired children have a lifetime of blindness ahead of them. Various studies across the globe show that one-third to half of childhood blindness is either preventable or treatable, and that cataract is the leading treatable cause of blindness in children. The 8th General Assembly of the International Agency for the Prevention of Blindness (IAPB) provided an opportunity to become acquainted with recent research and programme development work in the prevention of childhood blindness.

  20. In blind pursuit of racial equality?

    Science.gov (United States)

    Apfelbaum, Evan P; Pauker, Kristin; Sommers, Samuel R; Ambady, Nalini

    2010-11-01

    Despite receiving little empirical assessment, the color-blind approach to managing diversity has become a leading institutional strategy for promoting racial equality, across domains and scales of practice. We gauged the utility of color blindness as a means to eliminating future racial inequity--its central objective--by assessing its impact on a sample of elementary-school students. Results demonstrated that students exposed to a color-blind mind-set, as opposed to a value-diversity mind-set, were actually less likely both to detect overt instances of racial discrimination and to describe such events in a manner that would prompt intervention by certified teachers. Institutional messages of color blindness may therefore artificially depress formal reporting of racial injustice. Color-blind messages may thus appear to function effectively on the surface even as they allow explicit forms of bias to persist.

  1. Visual impairment and blindness among the students of blind schools in Allahabad and its vicinity: A causal assessment.

    Science.gov (United States)

    Bhalerao, Sushank Ashok; Tandon, Mahesh; Singh, Satyaprakash; Dwivedi, Shraddha; Kumar, Santosh; Rana, Jagriti

    2015-03-01

    Information on eye diseases in blind school children in Allahabad is rare and sketchy. A cross-sectional study was performed to identify causes of blindness (BL) in blind school children with an aim to gather information on ocular morbidity in the blind schools in Allahabad and in its vicinity. A cross-sectional study was carried out in all the four blind schools in Allahabad and its vicinity. The students in the blind schools visited were included in the study and informed consent was obtained from parents. Relevant ocular history and basic ocular examinations were carried out on the students of the blind schools. A total of 90 students were examined in four schools of the blind in Allahabad and in the vicinity. The main causes of severe visual impairment and BL in the better eye of students were microphthalmos (34.44%), corneal scar (22.23%), anophthalmos (14.45%), pseudophakia (6.67%), optic nerve atrophy (6.67%), buphthalmos/glaucoma (3.33%), cryptophthalmos (2.22%), staphyloma (2.22%), cataract (2.22%), retinal dystrophy (2.22%), aphakia (1.11%), coloboma (1.11%), retinal detachment (1.11%), etc. Of these, 22 (24.44%) students had preventable causes of BL and another 12 (13.33%) students had treatable causes of BL. It was found that hereditary diseases, corneal scar, glaucoma and cataract were the prominent causes of BL among the students of blind schools. Almost 38% of the students had preventable or treatable causes, indicating the need for genetic counseling and focused intervention.

  2. Visual impairment and blindness among the students of blind schools in Allahabad and its vicinity: A causal assessment

    Directory of Open Access Journals (Sweden)

    Sushank Ashok Bhalerao

    2015-01-01

    Background/Aims: Information on eye diseases in blind school children in Allahabad is rare and sketchy. A cross-sectional study was performed to identify causes of blindness (BL) in blind school children with an aim to gather information on ocular morbidity in the blind schools in Allahabad and in its vicinity. Study Design and Setting: A cross-sectional study was carried out in all the four blind schools in Allahabad and its vicinity. Materials and Methods: The students in the blind schools visited were included in the study and informed consent was obtained from parents. Relevant ocular history and basic ocular examinations were carried out on the students of the blind schools. Results: A total of 90 students were examined in four schools of the blind in Allahabad and in the vicinity. The main causes of severe visual impairment and BL in the better eye of students were microphthalmos (34.44%), corneal scar (22.23%), anophthalmos (14.45%), pseudophakia (6.67%), optic nerve atrophy (6.67%), buphthalmos/glaucoma (3.33%), cryptophthalmos (2.22%), staphyloma (2.22%), cataract (2.22%), retinal dystrophy (2.22%), aphakia (1.11%), coloboma (1.11%), retinal detachment (1.11%), etc. Of these, 22 (24.44%) students had preventable causes of BL and another 12 (13.33%) students had treatable causes of BL. Conclusion: It was found that hereditary diseases, corneal scar, glaucoma and cataract were the prominent causes of BL among the students of blind schools. Almost 38% of the students had preventable or treatable causes, indicating the need for genetic counseling and focused intervention.

  3. Suspected-target pesticide screening using gas chromatography-quadrupole time-of-flight mass spectrometry with high resolution deconvolution and retention index/mass spectrum library.

    Science.gov (United States)

    Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong

    2014-10-01

    A strategy for suspected-target screening of pesticide residues in complicated matrices was developed using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. The initial detection of components in a matrix was done by high-resolution mass spectrum deconvolution; the preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and the final confirmation was accomplished by accurate mass measurements of representative ions with their response ratios from the MS(1) spectra or representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow to real samples, three matrices of apple, spinach, and scallion, each spiked with 165 test pesticides in a set of concentrations, were selected as the models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms based on a narrow mass window (5 mDa), with suspected-target compounds then identified by similarity matching of the deconvoluted full mass spectra and filtering of linear retention indices. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, the application of product ion spectra was capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library seems to be one of the most efficient tools for the analysis of suspected-target pesticide residues.
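
    The accurate-mass confirmation step, requiring each representative ion to match within the 5 mDa window, can be sketched as follows (response-ratio checks omitted; the names are hypothetical):

        # Confirm a suspect if every representative library ion is found
        # within the mass window in the measured spectrum.
        import numpy as np

        def ions_confirmed(measured_mz, library_mz, window_da=0.005):
            measured_mz = np.asarray(measured_mz)
            return all(np.min(np.abs(measured_mz - mz)) <= window_da
                       for mz in library_mz)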

  4. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images.

    Science.gov (United States)

    Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong

    2017-01-01

    Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed the VGG-16 in all the segmentation. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a
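
    The Dice similarity coefficient used above to score the segmentations is DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B; a minimal implementation:

        # Dice similarity coefficient for binary segmentation masks.
        import numpy as np

        def dice(pred, truth, eps=1e-8):
            pred, truth = pred.astype(bool), truth.astype(bool)
            overlap = np.logical_and(pred, truth).sum()
            return 2.0 * overlap / (pred.sum() + truth.sum() + eps)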

  5. Future trends in global blindness

    Directory of Open Access Journals (Sweden)

    Serge Resnikoff

    2012-01-01

    Full Text Available The objective of this review is to discuss the available data on the prevalence and causes of global blindness, together with the associated trends and limitations. A literature search was conducted using the terms "global AND blindness" and "global AND vision AND impairment", resulting in seven appropriate articles for this review. Since 1990 the estimate of the global prevalence of blindness has gradually decreased under the best corrected visual acuity definition: 0.71% in 1990, 0.59% in 2002, and 0.55% in 2010, corresponding to a 0.73% reduction per year over the 2002-2010 period. Significant limitations were found in the comparability between the global estimates of the prevalence or causes of blindness or visual impairment. These limitations arise from various factors, such as uncertainties about the true cause of the impairment, the use of different definitions and methods, and the absence of data from a number of geographical areas, leading to various extrapolation methods, which in turn seriously limit comparability. Seminal to this discussion of limitations in the comparability of studies and data is that blindness has historically been defined using best corrected visual acuity.

  6. Time, Temperature, and Cationic Dependence of Alkali Activation of Slag: Insights from Fourier Transform Infrared Spectroscopy and Spectral Deconvolution.

    Science.gov (United States)

    Dakhane, Akash; Madavarapu, Sateesh Babu; Marzke, Robert; Neithalath, Narayanan

    2017-08-01

    The use of waste/by-product materials, such as slag or fly ash, activated using alkaline agents to create binding materials for construction applications (in lieu of portland cement) is on the rise. The influence of activation parameters (SiO2-to-Na2O ratio or Ms of the activator, Na2O-to-slag ratio or n, cation type K+ or Na+) on the process and extent of alkali activation of slag under ambient and elevated temperature curing, evaluated through spectroscopic techniques, is reported in this paper. Fourier transform infrared spectroscopy along with a Fourier self-deconvolution method is used. The major spectral band of interest lies in the wavenumber range of ~950 cm^-1, corresponding to the antisymmetric stretching vibration of Si-O-T (T = Si or Al) bonds. The variation in the spectra with time from 6 h to 28 days is attributed to the incorporation of Al in the gel structure and the enhancement in the degree of polymerization of the gel. 29Si nuclear magnetic resonance spectroscopy is used to quantify the Al incorporation with time, which is found to be higher when Na silicate is used as the activator. The Si-O-T bond wavenumbers are also generally lower for the Na silicate activated systems.
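
    For readers unfamiliar with Fourier self-deconvolution, a rough numpy sketch of the idea follows: the spectrum's Fourier transform is reweighted to undo part of the Lorentzian line decay, narrowing the bands. The half-width, narrowing factor, and apodization choice here are illustrative assumptions, not values from the paper.

```python
# Fourier self-deconvolution (FSD) sketch: narrow overlapping Lorentzian
# bands by reweighting the spectrum's Fourier transform.
import numpy as np

def fourier_self_deconvolve(spectrum, dx, gamma, narrowing=2.0):
    """Narrow Lorentzian lines of half-width `gamma` by `narrowing`."""
    n = len(spectrum)
    x = np.fft.rfftfreq(n, d=dx)          # Fourier-conjugate axis
    cep = np.fft.rfft(spectrum)
    # Undo the Lorentzian decay exp(-2*pi*gamma*x), then re-apply a
    # narrower one, for a net line width of gamma / narrowing.
    weight = np.exp(2 * np.pi * gamma * x * (1 - 1 / narrowing))
    # Triangular apodization, truncated where the original decay falls to
    # ~1e-12 of its initial value; beyond that only noise would be amplified.
    x_max = np.log(1e12) / (2 * np.pi * gamma)
    weight *= np.clip(1 - x / x_max, 0.0, None)
    return np.fft.irfft(cep * weight, n=n)

nu = np.linspace(800, 1100, 2048)                # wavenumber axis, cm^-1
band = 1.0 / (1 + ((nu - 940) / 25) ** 2) \
     + 0.8 / (1 + ((nu - 975) / 25) ** 2)        # two overlapping Lorentzians
narrowed = fourier_self_deconvolve(band, dx=nu[1] - nu[0], gamma=25.0)
```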

  7. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross-correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
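
    Multidimensional deconvolution itself is commonly posed per frequency as a damped least-squares inversion. The sketch below shows that formulation under assumed array shapes; it is a generic illustration, not the authors' implementation.

```python
# Multidimensional deconvolution (MDD) sketch: per frequency, the upgoing
# wavefield U equals the reflectivity G times the downgoing wavefield D;
# G is found by Tikhonov-damped least squares.
import numpy as np

def mdd(U, D, eps=1e-3):
    """U, D: (n_freq, n_receivers, n_sources) complex arrays.
    Returns G with U ~= G @ D at every frequency."""
    n_freq, n_rec, _ = U.shape
    G = np.empty((n_freq, n_rec, D.shape[1]), dtype=complex)
    for k in range(n_freq):
        Dk, Uk = D[k], U[k]
        # Normal equations: G = U D^H (D D^H + eps I)^-1
        DDH = Dk @ Dk.conj().T
        G[k] = Uk @ Dk.conj().T @ np.linalg.inv(DDH + eps * np.eye(len(DDH)))
    return G

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 4, 16)) + 1j * rng.standard_normal((8, 4, 16))
G_true = rng.standard_normal((8, 4, 4)) + 0j
U = G_true @ D
print(np.allclose(mdd(U, D, eps=1e-9), G_true, atol=1e-4))  # True
```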

  8. Causes of visual impairment and blindness in children at Instituto Benjamin Constant Blind School, Rio de Janeiro

    Directory of Open Access Journals (Sweden)

    Daniela da Silva Verzoni

    Full Text Available Abstract Objective: To determine the main causes of visual impairment and blindness in children enrolled at the Instituto Benjamin Constant blind school (IBC) in 2013, to aid in planning for the prevention and management of avoidable causes of blindness. Methods: Study design: cross-sectional observational study. Data were collected from the medical records of students attending IBC in 2013. Causes of blindness were classified according to the WHO/PBL examination record. Data were analyzed for children aged less than 16 years using the Stata 9 program. Results: Among 355 students attending IBC in 2013, 253 (73%) were included in this study. Of these children, 190 (75%) were blind and 63 (25%) visually impaired. The major anatomical site of visual loss was the retina (42%), followed by lesions of the globe (22%), optic nerve lesions (13.8%), the central nervous system (8.8%), and cataract/pseudophakia/aphakia (8.8%). The etiology was unknown in 41.9% of cases, and neonatal factors accounted for 30.9%. Forty-eight percent of cases were potentially avoidable. Retinopathy of prematurity (ROP) was the main cause of blindness and, together with microphthalmia, optic nerve atrophy, cataract, and glaucoma, accounted for more than 50% of cases. Conclusion: Provision and improvement of ROP, cataract, and glaucoma screening and treatment programs could prevent avoidable visual impairment and blindness.

  9. Choice blindness in financial decision making

    Directory of Open Access Journals (Sweden)

    Owen McLaughlin

    2013-09-01

    Full Text Available Choice Blindness is an experimental paradigm that examines the interplay between individuals' preferences, decisions, and expectations by manipulating the relationship between intention and choice. This paper expands upon the existing Choice Blindness framework by investigating the presence of the effect in an economically significant decision context, specifically that of pension choice. In addition, it investigates a number of secondary factors hypothesized to modulate Choice Blindness, including reaction time, risk preference, and decision complexity, as well as analysing the verbal reports of non-detecting participants. The experiment was administered to 100 participants of mixed age and educational attainment. The principal finding was that no more than 37.2% of manipulated trials were detected over all conditions, a result consistent with previous Choice Blindness research. Analysis of secondary factors found that reaction time, financial sophistication and decision complexity were significant predictors of Choice Blindness detection, while content analysis of non-detecting participant responses found that 20% implied significant preference changes and 62% adhered to initial preferences. Implications of the Choice Blindness effect in the context of behavioural economics are discussed, and an agenda for further investigation of the paradigm in this context is outlined.

  10. Sad Facial Expressions Increase Choice Blindness.

    Science.gov (United States)

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than for neutral faces, whereas no significant difference was observed between happy faces and neutral faces. An exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might result from the less attractive sad expressions (as compared to neutral expressions) inhibiting further processing of detailed facial features.

  11. Semi-parametric arterial input functions for quantitative dynamic contrast enhanced magnetic resonance imaging in mice

    Czech Academy of Sciences Publication Activity Database

    Taxt, T.; Reed, R. K.; Pavlin, T.; Rygh, C. B.; Andersen, E.; Jiřík, Radovan

    2018-01-01

    Roč. 46, FEB (2018), s. 10-20 ISSN 0730-725X R&D Projects: GA ČR GA17-13830S; GA MŠk(CZ) LO1212 Institutional support: RVO:68081731 Keywords : DCE-MRI * blind deconvolution * arterial input function Subject RIV: FA - Cardiovascular Diseases incl. Cardiothoracic Surgery Impact factor: 2.225, year: 2016

  12. Testing Children for Color Blindness

    Science.gov (United States)

    A study shows that kids can be tested for color blindness as soon as age 4, finds Caucasian boys ...

  13. Blind Braille readers mislocate tactile stimuli.

    Science.gov (United States)

    Sterr, Annette; Green, Lisa; Elbert, Thomas

    2003-05-01

    In a previous experiment, we observed that blind Braille readers produce errors when asked to identify on which finger of one hand a light tactile stimulus had occurred. With the present study, we aimed to specify the characteristics of this perceptual error in blind and sighted participants. The experiment confirmed that blind Braille readers mislocalised tactile stimuli more often than sighted controls, and that the localisation errors occurred significantly more often at the right reading hand than at the non-reading hand. Most importantly, we discovered that the reading fingers showed the smallest error frequency, but the highest rate of stimulus attribution. The dissociation of perceiving and locating tactile stimuli in the blind suggests altered tactile information processing. Neuroplasticity, changes in tactile attention mechanisms as well as the idea that blind persons may employ different strategies for tactile exploration and object localisation are discussed as possible explanations for the results obtained.

  14. User-centered Technologies For Blind Children

    Directory of Open Access Journals (Sweden)

    Jaime Sánchez

    2008-01-01

    Full Text Available The purpose of this paper is to review, summarize, and illustrate research work involving four audio-based games created within a user-centered design methodology through successive usability tasks and evaluations. These games were designed by considering the mental model of blind children and their styles of interaction to perceive and process data and information. The goal of these games was to enhance the cognitive development of spatial structures, memory, haptic perception, mathematical skills, navigation and orientation, and problem solving of blind children. Findings indicate significant improvements in learning and cognition from using audio-based tools specially tailored for the blind. That is, technologies for blind children, carefully tailored through user-centered design approaches, can make a significant contribution to cognitive development of these children. This paper contributes new insight into the design and implementation of audio-based virtual environments to facilitate learning and cognition in blind children.

  15. What It's Like to Be Color Blind

    Science.gov (United States)

    ... a green leaf might look tan or gray. Color blindness is passed down: it is almost always an inherited condition. Eye doctors (and some school nurses) test for color blindness by showing a picture made up of different ...

  16. Fracture detection in crystalline rock using ultrasonic reflection techniques: Volume 1

    International Nuclear Information System (INIS)

    Palmer, S.P.

    1982-11-01

    This research was initiated to investigate the use of ultrasonic seismic reflection techniques to detect fracture discontinuities in granitic rock. Initial compressional (P) and shear (SH) wave experiments were performed on a 0.9 x 0.9 x 0.3 meter granite slab in an attempt to detect seismic energy reflected from the opposite face of the slab. It was found that processing techniques such as deconvolution and array synthesis could improve the standout of the reflection event. During the summers of 1979 and 1980, SH reflection experiments were performed at a granite quarry near Knowles, California. The purpose of this study was to use SH reflection methods to detect an in situ fracture located one to three meters behind the quarry face. These SH data were later analyzed using methods similar to those applied in the laboratory. Interpretation of the later-arriving events observed in the SH field data as reflections from a steeply-dipping fracture was inconclusive. 41 refs., 43 figs., 7 tabs
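
    As an illustration of the kind of deconvolution processing mentioned above, the following numpy sketch applies frequency-domain Wiener deconvolution to a synthetic trace. The wavelet is assumed known here, which is a simplification, and the recovered spikes appear shifted by half the wavelet length because the toy wavelet is centered.

```python
# Frequency-domain Wiener deconvolution sketch for a seismic trace.
import numpy as np

def wiener_deconvolve(trace, wavelet, noise_level=1e-2):
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    T = np.fft.rfft(trace)
    # Wiener filter: W* / (|W|^2 + eps) keeps the division stable where W ~ 0
    H = np.conj(W) / (np.abs(W) ** 2 + noise_level)
    return np.fft.irfft(T * H, n)

# Toy example: two reflections blurred by a Ricker-like wavelet
t = np.arange(-32, 33) / 250.0
wavelet = (1 - 2 * (np.pi * 40 * t) ** 2) * np.exp(-(np.pi * 40 * t) ** 2)
reflectivity = np.zeros(500)
reflectivity[120] = 1.0
reflectivity[180] = -0.6
trace = np.convolve(reflectivity, wavelet, mode="same")
sharp = wiener_deconvolve(trace, wavelet)  # spikes near samples 88 and 148
```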

  17. What do colour-blind people really see?

    NARCIS (Netherlands)

    Hogervorst, M.A.; Alferdinck, J.W.A.M.

    2008-01-01

    Problem: colour perception of dichromats (colour-blind persons). Background: Various models have been proposed (e.g., Walraven & Alferdinck, 1997; Brettel et al., 1997) to model the reduced colour vision of colour-blind people. It is clear that colour-blind people cannot distinguish certain object

  18. Causes of childhood blindness in Ghana: results from a blind school survey in Upper West Region, Ghana, and review of the literature.

    Science.gov (United States)

    Huh, Grace J; Simon, Judith; Grace Prakalapakorn, S

    2017-06-13

    Data on childhood blindness in Ghana are limited. The objectives of this study were to determine the major causes of childhood blindness and severe visual impairment (SVI) at Wa Methodist School for the Blind in Northern Ghana, and to compare our results to those published from other studies conducted in Ghana. In this retrospective study, data from an eye screening at Wa Methodist School in November 2014 were coded according to the World Health Organization/Prevention of Blindness standardized reporting methodology. Causes of blindness/SVI were categorized anatomically and etiologically, and were compared to previously published studies. Of 190 students screened, the major anatomical causes of blindness/SVI were corneal scar/phthisis bulbi (CS/PB) (n = 28, 15%) and optic atrophy (n = 23, 12%). The major etiological causes of blindness/SVI were unknown (n = 114, 60%). Eighty-three (44%) students became blind before one year of age. In all four published blind school surveys conducted in Ghana, CS/PB was the most common anatomical cause of childhood blindness. Over time, the prevalence of CS/PB within blind schools decreased in the north and increased in the south. Measles-associated visual loss decreased from 52% in 1987 to 10% in 2014 at Wa Methodist School. In a blind school in northern Ghana, CS/PB was the major anatomical cause of childhood blindness/SVI. While CS/PB has been the most common anatomical cause of childhood blindness reported in Ghana, there may be regional changes in its prevalence over time. Being able to identify regional differences may guide future public health strategies to target specific causes.

  19. High-volume infiltration analgesia in total knee arthroplasty: a randomized, double-blind, placebo-controlled trial

    DEFF Research Database (Denmark)

    Andersen, L.O.; Husted, H.; Otte, K.S.

    2008-01-01

    with a detailed description of the infiltration technique. METHODS: In a randomized, double-blind, placebo-controlled trial in 12 patients undergoing bilateral knee arthroplasty, saline or high-volume (170 ml) ropivacaine (0.2%) with epinephrine was infiltrated around each knee, with repeated doses administered...

  20. Blindness causes analysis of 1854 hospitalized patients in Xinjiang

    Directory of Open Access Journals (Sweden)

    Tian-Zuo Wang

    2015-01-01

    Full Text Available AIM: To analyze the causes of blindness in 1 854 hospitalized patients in our hospital, and to explore strategies and directions for blindness prevention according to differences in treatment efficacy. METHODS: Cluster sampling was used to select 5 473 patients admitted to the department of ophthalmology of our hospital from September 2010 to August 2013, among whom 1 854 were blind, accounting for 33.88% of hospitalized patients. Blindness was classified according to the WHO criteria on the basis of best corrected visual acuity (BCVA). RESULTS: Of the 1 854 blind patients, 728 were blind in the right eye, 767 in the left eye, and 359 in both eyes, adding up to 2 213 blind eyes; patients aged 60-80 years were in the majority. The top three blinding diseases were cataract, diabetic retinopathy, and glaucoma. Of the 2 213 blind eyes, 2 172 were treated, of which 1 762 (81.12%) were treated successfully and 410 (18.88%) failed. Among the failed cases, the first three diseases were diabetic retinopathy, glaucoma, and retinal detachment. CONCLUSION: In recent years the etiology of blinding eye disease has changed, but cataract, diabetic retinopathy, and glaucoma still account for a high proportion of blindness, so the treatment of diabetic retinopathy, glaucoma, and retinal detachment should be the emphasis of blindness prevention and treatment in the future.

  1. Tactile spatial resolution in blind braille readers.

    Science.gov (United States)

    Van Boven, R W; Hamilton, R H; Kauffman, T; Keenan, J P; Pascual-Leone, A

    2000-06-27

    To determine if blind people have heightened tactile spatial acuity. Recently, studies using magnetic source imaging and somatosensory evoked potentials have shown that the cortical representation of the reading fingers of blind Braille readers is expanded compared to that of fingers of sighted subjects. Furthermore, the visual cortex is activated during certain tactile tasks in blind subjects but not sighted subjects. The authors hypothesized that the expanded cortical representation of fingers used in Braille reading may reflect an enhanced fidelity in the neural transmission of spatial details of a stimulus. If so, the quantitative limit of spatial acuity would be superior in blind people. The authors employed a grating orientation discrimination task in which threshold performance is accounted for by the spatial resolution limits of the neural image evoked by a stimulus. The authors quantified the psychophysical limits of spatial acuity at the middle and index fingers of 15 blind Braille readers and 15 sighted control subjects. The mean grating orientation threshold was significantly (p = 0.03) lower in the blind group (1.04 mm) compared to the sighted group (1.46 mm). The self-reported dominant reading finger in blind subjects had a mean grating orientation threshold of 0.80 mm, which was significantly better than other fingers tested. Thresholds at non-Braille reading fingers in blind subjects averaged 1.12 mm, which were also superior to sighted subjects' performances. Superior tactile spatial acuity in blind Braille readers may represent an adaptive, behavioral correlate of cortical plasticity.

  2. Epidemiology of blindness in children.

    Science.gov (United States)

    Solebo, Ameenat Lola; Teoh, Lucinda; Rahi, Jugnoo

    2017-09-01

    An estimated 14 million of the world's children are blind. A blind child is more likely to live in socioeconomic deprivation, to be more frequently hospitalised during childhood and to die in childhood than a child not living with blindness. This update of a previous review on childhood visual impairment focuses on emerging therapies for children with severe visual disability (severe visual impairment and blindness or SVI/BL). For children in higher income countries, cerebral visual impairment and optic nerve anomalies remain the most common causes of SVI/BL, while retinopathy of prematurity (ROP) and cataract are now the most common avoidable causes. The constellation of causes of childhood blindness in lower income settings is shifting from infective and nutritional corneal opacities and congenital anomalies to more resemble the patterns seen in higher income settings. Improvements in maternal and neonatal health and investment in and maintenance of national ophthalmic care infrastructure are the key to reducing the burden of avoidable blindness. New therapeutic targets are emerging for childhood visual disorders, although the safety and efficacy of novel therapies for diseases such as ROP or retinal dystrophies are not yet clear. Population-based epidemiological research, particularly on cerebral visual impairment and optic nerve hypoplasia, is needed in order to improve understanding of risk factors and to inform and support the development of novel therapies for disorders currently considered 'untreatable'. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Blind topological measurement-based quantum computation.

    Science.gov (United States)

    Morimae, Tomoyuki; Fujii, Keisuke

    2012-01-01

    Blind quantum computation is a novel secure quantum-computing protocol that enables Alice, who does not have sufficient quantum technology at her disposal, to delegate her quantum computation to Bob, who has a fully fledged quantum computer, in such a way that Bob cannot learn anything about Alice's input, output and algorithm. A recent proof-of-principle experiment demonstrating blind quantum computation in an optical system has raised new challenges regarding the scalability of blind quantum computation in realistic noisy conditions. Here we show that fault-tolerant blind quantum computation is possible in a topologically protected manner using the Raussendorf-Harrington-Goyal scheme. The error threshold of our scheme is 4.3 × 10^-3, which is comparable to that (7.5 × 10^-3) of non-blind topological quantum computation. As an error per gate of the order of 10^-3 has already been achieved in some experimental systems, our result implies that secure cloud quantum computation is within reach.

  4. [Frequency and causes of blindness and visual impairment in schools for the blind in Yaoundé (Cameroon)].

    Science.gov (United States)

    Noche, Christelle Domngang; Bella, Assumpta Lucienne

    2010-01-01

    To determine the causes of blindness and visual impairment in students attending schools for the blind in Yaoundé (Cameroon) and to estimate their frequencies. This study examined all 56 students at three schools for the blind in Yaoundé from September 15 through October 15, 2006. We collected data about their age, sex, and medical and surgical history. Visual acuity was measured to determine their vision status according to the World Health Organization categories for blindness and visual impairment. All subjects underwent an ocular examination. Epi Info 3.5.1 was used for the statistical analysis of age, sex, visual acuity, causes of blindness and visual impairment, and etiologies. Fifty-six people were examined: 37 men (66.1%) and 19 women (33.9%). Their mean age was 21.57 ± 10.53 years (min-max: 5-49), and 48.2% were in the 10-19 years age group (n = 27). In all, 87.5% were blind, 7.14% severely visually impaired, and 1.78% moderately visually impaired. The main causes of blindness and visual impairment in our sample were corneal disease (32.14%), optic nerve lesions (26.78%), cataract and its surgical complications (19.64%), retinal disorders (10.71%), glaucoma (8.92%), and malformations of the eyeball (1.78%). Their etiologies included congenital cataracts (19.64%), meningitis/fever (8.92%), glaucoma (7.14%), measles (5.35%), ocular trauma (5.35%), albinism (3.57%), Lyell syndrome (1.8%), and alcohol ingestion (1.8%). Etiology was unknown in 46.42%. Corneal lesions were the main cause of blindness and visual impairment in our sample, and fifty per cent of the causes found were treatable and/or preventable. Thus, substantial efforts are required to ensure access to better quality specialist ocular care. Furthermore, local authorities should create more centers specialised in the rehabilitation of the visually handicapped.

  5. Deconvolution analysis of 24-h serum cortisol profiles informs the amount and distribution of hydrocortisone replacement therapy.

    Science.gov (United States)

    Peters, Catherine J; Hill, Nathan; Dattani, Mehul T; Charmandari, Evangelia; Matthews, David R; Hindmarsh, Peter C

    2013-03-01

    Hydrocortisone therapy is based on a dosing regimen derived from estimates of cortisol secretion, but little is known of how the dose should be distributed throughout the 24 h. We have used deconvolution analysis of 24-h serum cortisol profiles to determine 24-h cortisol secretion and distribution to inform hydrocortisone dosing schedules in young children and older adults. Twenty-four hour serum cortisol profiles from 80 adults (41 men, aged 60-74 years) and 29 children (24 boys, aged 5-9 years) were subjected to deconvolution analysis using an 80-min half-life to ascertain total cortisol secretion and its distribution throughout the 24-h period. Mean daily cortisol secretion was similar between adults (6.3 mg/m(2) body surface area/day, range 5.1-9.3) and children (8.0 mg/m(2) body surface area/day, range 5.3-12.0). Peak serum cortisol concentration was higher in children compared with adults, whereas nadir serum cortisol concentrations were similar. Timing of the peak serum cortisol concentration was similar (07.05-07.25), whereas the nadir concentration occurred later in adults (midnight) compared with children (22.48) (P = 0.003). Children had the highest percentage of cortisol secretion between 06.00 and 12.00 (38.4%), whereas in adults this took place between midnight and 06.00 (45.2%). These observations suggest that the daily hydrocortisone replacement dose should be equivalent on average to 6.3 mg/m(2) body surface area/day in adults and 8.0 mg/m(2) body surface area/day in children. Differences in distribution of the total daily dose between older adults and young children need to be taken into account when using a three or four times per day dosing regimen. © 2012 Blackwell Publishing Ltd.
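
    The deconvolution step can be illustrated with a toy model: observed concentration is secretion convolved with mono-exponential elimination (80-min half-life, as above), and secretion is recovered by regularized least squares. The sampling grid, burst shape, and regularization weight are assumptions, not the authors' algorithm.

```python
# Deconvolution sketch: recover a secretion-rate profile from a sampled
# concentration profile, given exponential elimination kinetics.
import numpy as np

def deconvolve_secretion(concentration, dt_min, half_life_min=80.0, lam=1.0):
    n = len(concentration)
    k = np.log(2) / half_life_min
    t = np.arange(n) * dt_min
    # Lower-triangular convolution matrix: concentration = A @ secretion
    A = np.tril(np.exp(-k * (t[:, None] - t[None, :])) * dt_min)
    # Tikhonov regularization keeps the estimated secretion rate stable
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ concentration)

# Toy 24-h profile sampled every 20 min with a morning secretory burst
dt = 20.0
true_secretion = np.zeros(72)
true_secretion[18:24] = 0.5                       # burst from 06.00 to 08.00
kernel = np.exp(-np.log(2) / 80.0 * np.arange(72) * dt)
profile = np.convolve(true_secretion, kernel)[:72] * dt
estimate = deconvolve_secretion(profile, dt)      # ~ true_secretion
```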

  6. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Blind Signal Classification via Sparse Coding. Youngjune Gwon (MIT Lincoln Laboratory), Siamak Dastangoo (MIT Lincoln Laboratory) ... achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF ... classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6
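
    To make the sparse-coding idea concrete, here is a hypothetical numpy sketch: a signal segment is greedily decomposed over a class dictionary, and classification picks the dictionary with the smallest residual. The random dictionaries and matching-pursuit encoder are illustrative stand-ins; the report's actual method is not reproduced here.

```python
# Sparse-coding classification sketch: encode a segment over per-class
# dictionaries and classify by reconstruction error.
import numpy as np

def sparse_code(x, D, n_nonzero=3):
    """Greedy matching pursuit: pick atoms most correlated with residual."""
    coeffs = np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(n_nonzero):
        j = np.argmax(np.abs(D.T @ residual))
        c = D[:, j] @ residual        # atoms are unit-norm, so this projects
        coeffs[j] += c
        residual -= c * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(1)
D_a = rng.standard_normal((64, 128)); D_a /= np.linalg.norm(D_a, axis=0)
D_b = rng.standard_normal((64, 128)); D_b /= np.linalg.norm(D_b, axis=0)
x = D_a[:, [5, 40]] @ np.array([1.0, -0.7])       # signal built from class A
errors = [np.linalg.norm(sparse_code(x, D)[1]) for D in (D_a, D_b)]
print("class A" if errors[0] < errors[1] else "class B")   # -> class A
```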

  7. Environment and Blindness Situation in Iran

    Directory of Open Access Journals (Sweden)

    Soraya Askari

    2010-04-01

    Full Text Available Objectives: The purpose of this study is to describe the experiences of adults with acquired blindness while performing the daily activities of normal life and to investigate the role of environmental factors in this process. Methods: A qualitative phenomenological method was designed for this study. A sample of 22 adults with acquired blindness, who had been blind for more than 5 years, were purposefully selected, and semi-structured in-depth interviews were conducted with them. The interviews were transcribed verbatim, coded, and analyzed using van Manen's method. Results: The five clustered themes that emerged from the interviews were: (1) Products and technology: the benefits and drawbacks of using advanced technology to promote independence; (2) Physical environment: "The streets are like an obstacle course"; (3) Support and relationships: the assistance that blind people receive from family, friends, and society; (4) Attitudes: family and social attitudes toward blind people; (5) Services and policies: social security, supportive acts, economic factors, educational problems, and the provision of services. Discussion: The findings identify how the daily living activities of blind people are affected by environmental factors and what those factors are. The results will enable occupational therapists and other health care professionals who are involved with blind people to become more competent during assessment, counseling, teaching, giving support, or other interventions as needed to assist blind people. Recommendations for further research include more studies of this population to identify other challenges over time; this would facilitate long-term goals in care. Studies that include more diversity in demographic characteristics would provide greater generalization. Some characteristics, such as the adolescent age group, married and single status, ethnicity, and socioeconomic status, are particularly important to target.

  8. The Prevention of Blindness-Past, Present and Future

    Institute of Scientific and Technical Information of China (English)

    Akira; Nakajima

    1992-01-01

    Prevention of blindness is the most important aim of ophthalmology. It is related to many factors, such as science and technology, the economy, and social behavior. There are worldwide activities by the WHO, NGOs, and other bodies to promote the prevention of blindness. More than 90% of the blind population lives in the developing world. Cataract, which is curable, is the top cause of blindness. Onchocerciasis is an endemic disease in west Africa and cent...

  9. Semi-blind identification of wideband MIMO channels via stochastic sampling

    OpenAIRE

    Andrieu, Christophe; Piechocki, Robert J.; McGeehan, Joe P.; Armour, Simon M.

    2003-01-01

    In this paper we address the problem of wide-band multiple-input multiple-output (MIMO) channel (multidimensional time-invariant FIR filter) identification using Markov chain Monte Carlo methods. Towards this end we develop a novel stochastic sampling technique that produces a sequence of multidimensional channel samples. The method is semi-blind in the sense that it uses a very short training sequence. In such a framework the problem is no longer analytically tractable; hence we resort to s...

  10. Unblinding the dark matter blind spots

    International Nuclear Information System (INIS)

    Han, Tao; Kling, Felix

    2017-01-01

    The dark matter (DM) blind spots in the Minimal Supersymmetric Standard Model (MSSM) refer to the parameter regions where the couplings of the DM particles to the Z-boson or the Higgs boson are almost zero, leading to vanishingly small signals for the DM direct detections. In this paper, we carry out comprehensive analyses for the DM searches under the blind-spot scenarios in MSSM. Guided by the requirement of acceptable DM relic abundance, we explore the complementary coverage for the theory parameters at the LHC, the projection for the future underground DM direct searches, and the indirect searches from the relic DM annihilation into photons and neutrinos. We find that (i) the spin-independent (SI) blind spots may be rescued by the spin-dependent (SD) direct detection in the future underground experiments, and possibly by the indirect DM detections from IceCube and SuperK neutrino experiments; (ii) the detection of gamma rays from Fermi-LAT may not reach the desirable sensitivity for searching for the DM blind-spot regions; (iii) the SUSY searches at the LHC will substantially extend the discovery region for the blind-spot parameters. As a result, the dark matter blind spots thus may be unblinded with the collective efforts in future DM searches.

  11. Deaf-Blind Perspectives, 2000-2001.

    Science.gov (United States)

    Malloy, Peggy, Ed.

    2001-01-01

    These three issues of "Deaf-Blind Perspectives" feature the following articles: (1) "A Group for Students with Usher Syndrome in South Louisiana" (Faye Melancon); (2) "Simply Emily," which discusses a budding friendship between a girl with deaf-blindness and a peer; (3) "Intervener Update" (Peggy Malloy and…

  12. A Hybrid Technique for Blind Separation of Non-Gaussian and Time-Correlated Sources Using a Multicomponent Approach

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk; Yeredor, A.; Gómez-Herrero, G.; Doron, E.

    2008-01-01

    Roč. 19, č. 3 (2008), s. 421-430 ISSN 1045-9227 R&D Projects: GA MŠk 1M0572 Grant - others:GA ČR(CZ) GP102/07/P384 Program:GP Institutional research plan: CEZ:AV0Z10750506 Keywords : blind source separation * independent component analysis Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.726, year: 2008

  13. Multichannel deblurring of digital images

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip; Flusser, Jan

    2011-01-01

    Roč. 47, č. 3 (2011), s. 439-454 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : image restoration * blind deconvolution * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0360217.pdf

  14. Restoration of retinal images with space-variant blur

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Millán, M. S.; Šorel, Michal; Šroubek, Filip

    2014-01-01

    Roč. 19, č. 1 (2014), 016023-1-016023-12 ISSN 1083-3668 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind deconvolution * space-variant restoration * retinal image Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.859, year: 2014 http://library.utia.cas.cz/separaty/2014/ZOI/sorel-0424586.pdf

  15. Finding Objects for Assisting Blind People

    OpenAIRE

    Yi, Chucai; Flores, Roberto W.; Chincha, Ricardo; Tian, YingLi

    2013-01-01

    Computer vision technology has been widely used for blind assistance, such as navigation and wayfinding. However, few camera-based systems have been developed to help blind or visually impaired people find daily necessities. In this paper, we propose a prototype system for blind-assistant object finding using a camera-based network and matching-based recognition. We collect a dataset of daily necessities and apply Speeded-Up Robust Features (SURF) and Scale Invariant Feature Transform (SIFT) featu...
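
    A matching-based recognition step of the kind described can be sketched with OpenCV. This assumes opencv-python with SIFT available (cv2.SIFT_create); the file names are placeholders, not files from the paper.

```python
# SIFT keypoint matching sketch: compare a query photo against a stored
# reference image of a daily necessity and count good matches.
import cv2

# Placeholder file names; any grayscale photos of a scene and an object work.
query = cv2.imread("query_scene.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_object.png", cv2.IMREAD_GRAYSCALE)
assert query is not None and reference is not None, "images not found"

sift = cv2.SIFT_create()
kp_q, des_q = sift.detectAndCompute(query, None)
kp_r, des_r = sift.detectAndCompute(reference, None)

# Lowe's ratio test: keep matches clearly better than their runner-up
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_q, des_r, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")  # many matches -> object likely present
```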

  16. 42 CFR 435.531 - Determinations of blindness.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Determinations of blindness. 435.531 Section 435... ISLANDS, AND AMERICAN SAMOA Categorical Requirements for Eligibility Blindness § 435.531 Determinations of blindness. (a) Except as specified in paragraph (b) of this section, in determining blindness— (1) A...

  17. Occupant satisfaction with two blind control strategies

    DEFF Research Database (Denmark)

    Karlsen, Line Røseth; Heiselberg, Per Kvols; Bryn, Ida

    2015-01-01

    Highlights: Occupant satisfaction with two blind control strategies has been studied. Control based on the cut-off position of slats was more popular than closed slats. Results from the study are helpful in the development of control strategies for blinds. The results give indications of how blinds...

  18. Glaucoma Blindness at a Tertiary Eye Care Center.

    Science.gov (United States)

    Stone, Jordan S; Muir, Kelly W; Stinnett, Sandra S; Rosdahl, Jullia A

    2015-01-01

    Glaucoma is an important cause of irreversible blindness. This study describes the characteristics of a large, diverse group of glaucoma patients and evaluates associations between demographic and clinical characteristics and blindness. Data were gathered via retrospective chart review of patients (N = 1,454) who were seen between July 2007 and July 2010 by glaucoma service providers at Duke Eye Center. Visual acuity and visual field criteria were used to determine whether patients met the criteria for legal blindness. Descriptive and comparative statistical analyses were performed on the glaucoma patients who were not blind (n = 1,258) and those who were blind (n = 196). A subgroup analysis of only those patients with primary open-angle glaucoma was also performed. In this tertiary care population, 13% (n = 196) of glaucoma patients met criteria for legal blindness, nearly one-half of whom (n = 94) were blind from glaucoma, and another one-third of whom (n = 69) had glaucoma-related blindness. The most common glaucoma diagnosis at all levels of vision was primary open-angle glaucoma. A larger proportion of black patients compared with white patients demonstrated vision loss; the odds ratio (OR) for blindness was 2.25 (95% CI, 1.6-3.2) for black patients compared with white patients. The use of systemic antihypertensive medications was higher among patients who were blind compared with patients who were not blind (OR = 2.1; 95% CI, 1.4-3.1). A subgroup analysis including only patients with primary open-angle glaucoma showed similar results for both black race and use of systemic antihypertensive medications. The relationship between use of systemic antihypertensive medications and blindness was not different between black patients and white patients (interaction P = .268). Data were based on chart review, and associations may be confounded by unmeasured factors. Treated systemic hypertension may be correlated with blindness, and the cause cannot be explained solely

  19. Thermoluminescence of nanocrystalline CaSO{sub 4}: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    Energy Technology Data Exchange (ETDEWEB)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com [Department of Physics, University of Pune, Ganeshkhind, Pune -411007, India and Department of Physics, Fergusson College, Pune- 411004 (India); Patil, B. J.; Bhoraskar, V. N.; Dhole, S. D. [Department of Physics, University of Pune, Ganeshkhind, Pune -411007 (India); Sahare, P. D. [Department of Physics and Astrophysics, University of Delhi, Delhi- 110007 (India)

    2014-04-24

    Nanorods of CaSO{sub 4}: Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows linear behavior up to 5 kGy and saturates as the dose increases further. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves. Trapping parameters for the various peaks have been calculated using the CGCD program.
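
    A glow curve deconvolution of this kind can be sketched as a nonlinear fit of first-order (Randall-Wilkins) peaks. The two-peak model and all peak parameters below are toy values for illustration, not the CaSO4:Dy results.

```python
# CGCD-style sketch: fit a sum of first-order kinetics TL peaks to a
# synthetic glow curve with scipy's curve_fit.
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_peak(T, n0, E, s, beta=1.0):
    """Randall-Wilkins TL intensity for a linear heating rate beta (K/s)."""
    rate = s * np.exp(-E / (K_B * T))
    integral = cumulative_trapezoid(rate, T, initial=0.0) / beta
    return n0 * rate * np.exp(-integral)

def two_peak_model(T, n1, E1, s1, n2, E2, s2):
    return first_order_peak(T, n1, E1, s1) + first_order_peak(T, n2, E2, s2)

T = np.linspace(350, 600, 400)                       # temperature axis, K
observed = two_peak_model(T, 1e5, 1.0, 1e10, 5e4, 1.2, 1e11)
observed += np.random.default_rng(2).normal(0, 50, T.size)
p0 = [8e4, 0.9, 5e9, 4e4, 1.1, 5e10]                 # rough initial guesses
popt, _ = curve_fit(two_peak_model, T, observed, p0=p0, maxfev=20000)
```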

  20. 45 CFR 233.70 - Blindness.

    Science.gov (United States)

    2010-10-01

    ...). Such physician is responsible for making the agency's decision that the applicant or recipient does or... XVI of the Social Security Act must: (1) Contain a definition of blindness in terms of ophthalmic measurement. The following definition is recommended: An individual is considered blind if he has central...

  1. Congenital color blindness in young Turkish men.

    Science.gov (United States)

    Citirik, Mehmet; Acaroglu, Golge; Batman, Cosar; Zilelioglu, Orhan

    2005-04-01

    We investigated a healthy population of men from different regions of Turkey for the presence of congenital red-green color blindness. Using Ishihara pseudoisochromatic plates, 941 healthy men from the Turkish army were tested for congenital red-green color blindness. The prevalence of red-green color blindness was 7.33 ± 0.98% (5.10% protans and 2.23% deutans). These ratios were higher than those reported for other samples from Mediterranean Europe. Higher percentages of color blindness were found in regions with a lower education level and more consanguineous marriages.

  2. Concurrent and lagged impacts of an anomalously warm year on autotrophic and heterotrophic components of soil respiration: a deconvolution analysis.

    Science.gov (United States)

    Zhou, Xuhui; Luo, Yiqi; Gao, Chao; Verburg, Paul S J; Arnone, John A; Darrouzet-Nardi, Anthony; Schimel, David S

    2010-07-01

    Partitioning soil respiration into autotrophic (RA) and heterotrophic (RH) components is critical for understanding their differential responses to climate warming. Here, we used a deconvolution analysis to partition soil respiration in a pulse warming experiment. We first conducted a sensitivity analysis to determine which parameters can be identified from soil respiration data. A Markov chain Monte Carlo technique was then used to optimize those identifiable parameters in a terrestrial ecosystem model. Finally, the optimized parameters were employed to quantify RA and RH in a forward analysis. Our results showed that more than one-half of the parameters were constrained by daily soil respiration data. The optimized model simulation showed that warming stimulated RH and had little effect on RA in the first 2 months, but decreased both RH and RA during the remainder of the treatment and post-treatment years. Clipping of above-ground biomass stimulated the warming effect on RH but not on RA. Overall, warming decreased RA and RH significantly, by 28.9% and 24.9%, respectively, during the treatment year and by 27.3% and 33.3%, respectively, during the post-treatment year, largely as a result of decreased canopy greenness and biomass. Lagged effects of climate anomalies on soil respiration and its components are important in assessing terrestrial carbon cycle feedbacks to climate warming.
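
    The Markov chain Monte Carlo step can be illustrated generically. Below is a minimal Metropolis sampler fitted to a toy one-parameter respiration model (a hypothetical Q10 temperature relation), not the terrestrial ecosystem model used in the study.

```python
# Metropolis MCMC sketch: sample a model parameter whose simulated
# respiration matches observations.
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.1, seed=3):
    rng = np.random.default_rng(seed)
    chain = [x0]
    lp = log_post(x0)
    for _ in range(n_steps):
        prop = chain[-1] + rng.normal(0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy "model": respiration = q10 ** (temperature / 10); fit q10 to data
temps = np.linspace(5, 25, 30)
obs = 2.0 ** (temps / 10) + np.random.default_rng(4).normal(0, 0.1, 30)
log_post = lambda q10: (-np.inf if q10 <= 0 else
                        -0.5 * np.sum((obs - q10 ** (temps / 10)) ** 2) / 0.1 ** 2)
samples = metropolis(log_post, x0=1.5)
print(samples[1000:].mean())   # posterior mean near the true Q10 of 2.0
```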

  3. Psychological and social adjustment to blindness: understanding from two groups of blind people in Ilorin, Nigeria.

    Science.gov (United States)

    Tunde-Ayinmode, Mosunmola F; Akande, Tanimola M; Ademola-Popoola, Dupe S

    2011-01-01

    Blindness can cause psychosocial distress leading to maladjustment if not mitigated. Maladjustment is a secondary burden that further reduces the quality of life of the blind. Adjustment is often personalized and depends on the nature and quality of the prevailing psychosocial support and rehabilitation opportunities. This study was aimed at identifying the pattern of psychosocial adjustment in a group of relatively secluded and under-reached totally blind people in Ilorin, thus sensitizing eye doctors to psychosocial morbidity and care in the blind. A cross-sectional descriptive study was conducted using the 20-item Self-Reporting Questionnaire (SRQ) and a pro forma designed by the authors to assess the psychosocial problems and risk factors in a group of blind people in the Ilorin metropolis. The study revealed that most of the blind people were reasonably adjusted in the key areas of social interaction, marriage, and family. The majority were considered to be poorly adjusted in the areas of education, vocational training, employment, and mobility. Many were also considered to be psychologically maladjusted based on the high rate of probable psychological disorder of 51%, as determined by the SRQ. Factors identified as risk factors for probable psychological disorder were a poor educational background and the presence of another medical disorder. Most of the blind had no access to formal education or a rehabilitation system, which may have contributed to their maladjustment in the domains identified. Although much of their psychosocial morbidity could have been prevented, real opportunity still exists to help this group of people in the areas of social and physical rehabilitation, medical care, preventive psychiatry, preventive ophthalmology, and community health. This will require the joint efforts of the medical community and government and nongovernment organizations to provide the framework for delivery of these services directly to the communities.

  4. 10 CFR 26.168 - Blind performance testing.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Blind performance testing. 26.168 Section 26.168 Energy... and Human Services § 26.168 Blind performance testing. (a) Each licensee and other entity shall submit blind performance test samples to the HHS-certified laboratory. (1) During the initial 90-day period of...

  5. Causes of blindness among hospital outpatients in Ecuador.

    Science.gov (United States)

    Cass, Helene; Landers, John; Benitez, Paul

    2006-03-01

    There is a lack of published information on the causes of blindness in Ecuador and the Latin American region in general. This study was designed to enumerate the proportions of ocular conditions contributing to blindness in an outpatient population of an ophthalmology hospital in the coastal region of Ecuador. All cases presenting to an ophthalmology outpatient clinic over a 3-week period during September 2004 were reviewed (n = 802). Visual acuity was measured using a Snellen acuity chart and those who met the criteria for blindness were included in the study (n = 118). Blindness was defined under the World Health Organization protocol as visual acuity of [...] glaucoma (15%). Among those considered to have bilateral blindness (n = 30), refraction was the most common cause (37%), followed by cataract (23%) and glaucoma (17%). The major causes of blindness found in this study reflected the estimated data for the region. More studies are needed to improve the quality and quantity of epidemiological data on blindness in Ecuador and Latin America. Many obstacles to the successful implementation of blindness prevention programmes in South America still need to be overcome.

  6. A Smart Infrared Microcontroller-Based Blind Guidance System

    Directory of Open Access Journals (Sweden)

    Amjed S. Al-Fahoum

    2013-01-01

    Full Text Available Blindness is a state of lacking visual perception due to physiological or neurological factors. Partial blindness represents a lack of integration in the growth of the optic nerve or visual centre of the eye, and total blindness is the full absence of visual light perception. In this work, a simple, cheap, user-friendly, smart blind guidance system is designed and implemented to improve the mobility of both blind and visually impaired people in a specific area. The proposed work includes wearable equipment consisting of a head hat and a mini hand stick to help the blind person navigate alone safely and avoid any obstacles that may be encountered, whether fixed or mobile, to prevent any possible accident. The main component of this system is the infrared sensor, which is used to scan a predetermined area around the blind person by emitting waves and detecting their reflections. The reflected signals received from barrier objects are used as inputs to a PIC microcontroller. The microcontroller then determines the direction and distance of the objects around the blind person. It also controls the peripheral components that alert the user about the obstacle's shape, material, and direction. The implemented system is cheap, fast, easy to use, and an innovative, affordable solution for blind and visually impaired people in third-world countries.

  7. An Overwhelming Desire to Be Blind: Similarities and Differences between Body Integrity Identity Disorder and the Wish for Blindness.

    Science.gov (United States)

    Gutschke, Katja; Stirn, Aglaja; Kasten, Erich

    2017-01-01

    The urge to be permanently blind is an extremely rare mental health disturbance. The underlying cause of this desire has not been determined yet, and it is uncertain whether the wish for blindness is a condition that can be included in the context of body integrity identity disorder, a condition where people feel an overwhelming need to be disabled, in many cases by amputation of a limb or through paralysis. The aim of this study is to test the hypothesis that people with a desire for blindness suffer from a greater degree of visual stress in daily activities than people in a healthy visual control group. We created a Likert scale questionnaire to measure visual stress, covering a wide range of everyday situations. The wish for blindness is extremely rare, and worldwide only 5 people with an urge to be blind were found to participate in the study (4 female, 1 male). In addition, a control group of 35 (28 female, 7 male) visually healthy people was investigated. Questions addressing issues that may be experienced by participants with a desire to be blind were integrated into the questionnaire. The hypothesis that people with a desire for blindness suffer from a significantly higher visual overload in activities of daily living than visually healthy subjects was confirmed; the difference in visual stress between the groups was significant (p < 0.01). In addition, an interview with the 5 affected participants supported the causal role of visual overload. The desire for blindness seems to originate from visual overload caused by either ophthalmologic or organic brain disturbances. In addition, psychological reasons such as certain personal character traits may play an active role in developing, maintaining, and reinforcing one's desire to be blind.

  8. An Overwhelming Desire to Be Blind: Similarities and Differences between Body Integrity Identity Disorder and the Wish for Blindness

    Directory of Open Access Journals (Sweden)

    Katja Gutschke

    2017-03-01

    Full Text Available Background: The urge to be permanently blind is an extremely rare mental health disturbance. The underlying cause of this desire has not been determined yet, and it is uncertain whether the wish for blindness is a condition that can be included in the context of body integrity identity disorder, a condition where people feel an overwhelming need to be disabled, in many cases by amputation of a limb or through paralysis. Objective: The aim of this study is to test the hypothesis that people with a desire for blindness suffer from a greater degree of visual stress in daily activities than people in a healthy visual control group. Method: We created a Likert scale questionnaire to measure visual stress, covering a wide range of everyday situations. The wish for blindness is extremely rare, and worldwide only 5 people with an urge to be blind were found to participate in the study (4 female, 1 male). In addition, a control group of 35 (28 female, 7 male) visually healthy people was investigated. Questions addressing issues that may be experienced by participants with a desire to be blind were integrated into the questionnaire. Results: The hypothesis that people with a desire for blindness suffer from a significantly higher visual overload in activities of daily living than visually healthy subjects was confirmed; the difference in visual stress between the groups was significant (p < 0.01). In addition, an interview with the 5 affected participants supported the causal role of visual overload. Conclusions: The desire for blindness seems to originate from visual overload caused by either ophthalmologic or organic brain disturbances. In addition, psychological reasons such as certain personal character traits may play an active role in developing, maintaining, and reinforcing one's desire to be blind.

  9. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    International Nuclear Information System (INIS)

    Jimenez-Ruiz, A.; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R.

    2017-01-01

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs) causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained through the use of both means are in good agreement with each other.

  10. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez-Ruiz, A., E-mail: ailjimrui@alum.us.es; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R., E-mail: pradogotor@us.es [University of Seville, The Department of Physical Chemistry (Spain)

    2017-01-15

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs) causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained through the use of both means are in good agreement with each other.

  11. Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems

    KAUST Repository

    Zaib, Alam; Al-Naffouri, Tareq Y.

    2014-01-01

    This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-bound strategy that renders the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size are increased, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new framework for reducing the complexity is proposed, relying on subcarrier reordering and decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by utilizing the inherent structure of Alamouti coding, either estimation performance improvement or complexity reduction can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, presented against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
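
    The Alamouti structure that such detectors exploit is easy to state in code. The sketch below shows coherent per-subcarrier decoding with a known channel, the benchmark against which the blind and semi-blind algorithms are compared; the orthogonality of the code makes this linear combining exact ML.

```python
# Alamouti STBC decoding sketch (coherent case, channel known).
# Per subcarrier: two symbols s1, s2 are sent over two time slots
# from two transmit antennas.
import numpy as np

def alamouti_decode(y1, y2, h1, h2):
    """y1, y2: received samples in slots 1 and 2; h1, h2: channel gains.
    Returns estimates of s1, s2 (code orthogonality makes this exact ML)."""
    norm = abs(h1) ** 2 + abs(h2) ** 2
    s1 = (np.conj(h1) * y1 + h2 * np.conj(y2)) / norm
    s2 = (np.conj(h2) * y1 - h1 * np.conj(y2)) / norm
    return s1, s2

rng = np.random.default_rng(5)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
s1, s2 = 1 + 1j, -1 + 1j                     # QPSK symbols
y1 = h1 * s1 + h2 * s2                       # slot 1
y2 = -h1 * np.conj(s2) + h2 * np.conj(s1)    # slot 2 (Alamouti code)
print(alamouti_decode(y1, y2, h1, h2))       # ~ (1+1j, -1+1j)
```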

  12. Applications of two-photon fluorescence microscopy in deep-tissue imaging

    Science.gov (United States)

    Dong, Chen-Yuan; Yu, Betty; Hsu, Lily L.; Kaplan, Peter D.; Blankschstein, D.; Langer, Robert; So, Peter T. C.

    2000-07-01

    Based on the non-linear excitation of fluorescent molecules, two-photon fluorescence microscopy has become a significant new tool for biological imaging. The point-like excitation characteristic of this technique enhances image quality through the virtual elimination of off-focal fluorescence. Furthermore, sample photodamage is greatly reduced because fluorescence excitation is limited to the focal region. For deep tissue imaging, two-photon microscopy has the additional benefit of greatly improved depth penetration: since the near-infrared laser sources used in two-photon microscopy scatter less than their UV/blue-green counterparts, in-depth imaging of highly scattering specimens can be greatly improved. In this work, we will present data characterizing both the imaging properties (point spread functions) of this novel technology and images of tissue samples (skin). In particular, we will demonstrate how blind deconvolution can be used to further improve two-photon image quality and how this technique can be used to study mechanisms of chemically-enhanced, transdermal drug delivery.
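
    As an example of the deconvolution machinery involved, here is a compact numpy sketch of the Richardson-Lucy iteration, which underlies many (blind) deconvolution schemes for fluorescence imaging. Blind variants alternate this update between the image and the PSF; the PSF is assumed known here for brevity.

```python
# Richardson-Lucy deconvolution sketch for a fluorescence image.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Toy example: a point source blurred by a Gaussian PSF
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x ** 2 + y ** 2) / 4.0)
psf /= psf.sum()
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)    # re-concentrates the point
```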

  13. Blinded by Irrelevance: Pure Irrelevance Induced "Blindness"

    Science.gov (United States)

    Eitam, Baruch; Yeshurun, Yaffa; Hassan, Kinneret

    2013-01-01

    To what degree does our representation of the immediate world depend solely on its relevance to what we are currently doing? We examined whether relevance per se can cause "blindness," even when there is no resource limitation. In a novel paradigm, people looked at a colored circle surrounded by a differently colored ring--the task relevance of…

  14. Plaque index between blind and deaf children after dental health education

    Directory of Open Access Journals (Sweden)

    Cynthia Carissa

    2011-03-01

    Full Text Available Background: Difficulty in mobility and motor coordination can affect the health of the teeth and mouth. Dental health education for blind and deaf children differs according to their limitations. Blind and deaf children need particular guidance in dental health education to achieve the same oral hygiene as normal children. Purpose: The objective of this study was to observe the difference in plaque index between blind and deaf children before and after dental health education. Methods: This research used a purposive sampling technique. Twenty-three blind children were taken as samples from SLB-A Negeri Bandung and 31 deaf children from SLB-B Cicendo Bandung. The data were collected through plaque index examination using the modified patient hygiene performance (PHP) test. Results: The average plaque index of the 23 blind children was 3.0725 before dental health education and 1.7970 after; for the deaf children it was 2.7474 before and 1.5 after. Conclusion: The plaque index of deaf children is better than that of blind children both before and after dental health education.

  15. Comparison of Quality of Life and Social Skills between Students with Visual Problems (Blind and Partially Blind) and Normal Students

    OpenAIRE

    Fereshteh Kordestani; Azam Daneshfar; Davood Roustaee

    2014-01-01

    This study aimed to compare the quality of life and social skills between students who are visually impaired (blind and partially blind) and normal students. The population consisted of all students with visual problems (blind and partially blind) and normal students in secondary schools in Tehran in the academic year 2013-2014. Using a multi-stage random sampling method, 40 students were selected from each group. The SF-36 quality of life questionnaire and the Foster and Inderbitzen social skil...

  16. Reduced taste sensitivity in congenital blindness

    DEFF Research Database (Denmark)

    Gagnon, Lea; Kupers, Ron; Ptito, Maurice

    2013-01-01

    We measured detection and identification thresholds of the 5 basic tastants in 13 congenitally blind and 13 sighted control subjects. Participants also answered several eating-habits questionnaires, including the Food Neophobia Scale, the Food Variety Seeking Tendency Scale, the Intuitive Eating Scale, and the Body Awareness Questionnaire. Our behavioral results showed that, compared with the normally sighted, blind subjects have increased thresholds for taste detection and taste identification. This finding is at odds with the superior performance of congenitally blind subjects in several tactile, auditory and olfactory tasks.

  17. Causes of blindness and visual impairment among students in integrated schools for the blind in Nepal.

    Science.gov (United States)

    Shrestha, Jyoti Baba; Gnyawali, Subodh; Upadhyay, Madan Prasad

    2012-12-01

    To identify the causes of blindness and visual impairment among students in integrated schools for the blind in Nepal, a total of 778 students from all 67 integrated schools for the blind in Nepal were examined using the World Health Organization/Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision during the 3-year study period. Among 831 students enrolled in the schools, 778 (93.6%) participated in the study. The mean age of the students examined was 13.7 years, and the male-to-female ratio was 1.4:1. Among the students examined, 85.9% were blind, 10% had severe visual impairment and 4.1% were visually impaired. The cornea (22.8%) was the most common anatomical site of visual impairment, its most frequent cause being vitamin A deficiency, followed by the retina (18.4%) and the lens (17.6%). Hereditary and childhood factors were responsible for visual loss in 27.9% and 22.0% of students, respectively. Etiology could not be determined in 46% of cases. Overall, 40.9% of students had avoidable causes of visual loss. Vision could be improved to a level better than 6/60 in 3.6% of the students who underwent refraction. More than one third of the students were visually impaired for potentially avoidable reasons, indicating a lack of eye health awareness and eye care services in the community. The cause of visual impairment remained unknown in a large number of students, which indicates the need for modern diagnostic tools.

  18. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Födisch, P., E-mail: p.foedisch@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Wohsmann, J. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Lange, B. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Schönherr, J. [Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Enghardt, W. [OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, PF 41, 01307 Dresden (Germany); Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology, Bautzner Landstr. 400, 01328 Dresden (Germany); German Cancer Consortium (DKTK) and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Kaever, P. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany)

    2016-09-11

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. Its output signal is shaped by a characteristic exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem or from an increased rate of pulse pile-ups. Moreover, spectroscopy applications require correction of the pulse height, while a shortened pulse width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. Using a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of the amplifier is mapped onto an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients against the equivalent continuous-time frequency response. Finally, the exponential decay is shaped into a step-like output signal that can be exploited by forward-looking pulse processing.
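
    A minimal sketch of the core idea follows; the paper derives more general IIR filters from the measured amplifier transfer function, whereas here a single known decay constant is assumed and all numbers are illustrative. An exponential tail with discrete-time pole p = exp(-T/tau) is cancelled by the zero of H(z) = (1 - p z^-1) / (1 - z^-1), whose pole at z = 1 integrates the resulting impulses into the step-like output the paper describes.

```python
# Pole-zero deconvolution of exponential pulses into a step-like signal.
# Decay constant, sampling rate and pulse times are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter

fs, tau = 100e6, 5e-6                 # 100 MHz sampling, 5 us decay constant
p = np.exp(-1.0 / (fs * tau))         # discrete-time pole of the decay

# Synthetic charge-sensitive-amplifier output: two piled-up exponential pulses.
n = np.arange(4096)
x = np.zeros(n.size)
for start, amp in [(500, 1.0), (900, 0.6)]:
    x[start:] += amp * p ** n[: n.size - start]

# The numerator zero removes the decay; the denominator pole (an integrator)
# turns the resulting impulses into flat-topped steps.
y = lfilter([1.0, -p], [1.0, -1.0], x)
print(y[499], y[501], y[901])         # ~0.0, then ~1.0, then ~1.6
```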

  19. [Rudolph Tegner: The blind from Marrakech (1949-1950)].

    Science.gov (United States)

    Norn, M; Permin, H

    1999-01-01

    The Danish sculptor and painter Rudolph Tegner (1873-1950) built his own museum in Dronningmolle, where his sculptures enrich the unique landscape. His last, incomplete work, The Blind from Marrakech (Fig. 1), a plaster on a simple raw wooden scaffold, shows five figures groping about as they carry a dead body; all of them lack arms. Tegner had visited Marrakech a number of years earlier and had watched a funeral procession in which blind beggars carried a dead old woman raised high above the bearers on a kind of pall. A small version of the statue, later cast in bronze in 1963, shows 15 bearers on both sides of the bier (Figs. 2 & 3). Nine of the 15 bearers are looking up, two straight ahead, and the rest down. Totally blind people cannot see the light but can still look up toward the divine heaven, and some of the blind have kept a gleam of light perception. The inconsistent gaze directions show that they really are blind; moreover, 10 of the 15 blind figures have hollow eye sockets (exenteratio orbitae), in contrast to the dead woman, who had been the blind men's mistress. This last work, The Blind from Marrakech, may also express the despair of the artist.

  20. College Students Who Are Deaf-Blind. Practice Perspectives--Highlighting Information on Deaf-Blindness. Number 7

    Science.gov (United States)

    Arndt, Katrina

    2011-01-01

    Imagine being in college and being deaf-blind. What opportunities might you have? What types of challenges would you face? This publication describes a study that begins to answer these questions. During the study, 11 college students with deaf-blindness were interviewed about their college experiences. They were like most college students in many…