WorldWideScience

Sample records for blind deconvolution method

  1. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability has been explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the
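
    The degradation model underlying all of the records below is g = h * f + n, with both the image f and the blur kernel h unknown. A minimal numpy sketch of this forward model (names illustrative):

```python
import numpy as np

def degrade(image, kernel, noise_sigma=0.0, seed=0):
    """Forward model of blind deconvolution: blur a signal with a
    kernel and add Gaussian noise. Both `image` and `kernel` are the
    unknowns a blind method must recover from the output alone."""
    rng = np.random.default_rng(seed)
    blurred = np.convolve(image, kernel, mode="same")
    return blurred + noise_sigma * rng.standard_normal(image.shape)

# A box blur applied to an impulse spreads it over the kernel support.
f = np.zeros(9); f[4] = 1.0   # ideal signal: a single impulse
h = np.ones(3) / 3.0          # unknown 3-tap box blur
g = degrade(f, h)             # observed data
```

    Recovering both f and h from g alone is the problem every record on this page addresses.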

  2. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp

    2017-09-04

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular, this univariate case suffers from several non-trivial ambiguities, and blind deconvolution is therefore known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to a global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, in theory, recovery is computationally tractable. However, practical applications require efficient algorithms that operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach performs comparably to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios, and we discuss a specific signaling scheme where information is encoded into polynomial roots.

  3. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular, this univariate case suffers from several non-trivial ambiguities, and blind deconvolution is therefore known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to a global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, in theory, recovery is computationally tractable. However, practical applications require efficient algorithms that operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach performs comparably to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios, and we discuss a specific signaling scheme where information is encoded into polynomial roots.
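
    The polynomial-factorization framing used here rests on the identity that convolving two finite sequences multiplies their z-transform polynomials, which is exactly where the factorization ambiguities come from. A quick numpy check (illustrative only):

```python
import numpy as np

# Coefficients of two "signals" viewed as polynomials (highest degree first).
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0])

# Convolving the sequences equals multiplying the polynomials, so any
# factorization of the product is a candidate (x, y) pair -- the source
# of the ambiguities the abstract mentions.
conv = np.convolve(x, y)
prod = np.polymul(x, y)
print(conv)   # [ 4. 13. 22. 15.]
```

    Swapping root assignments between the two factors leaves the product unchanged, which is why extra information such as autocorrelations is needed to pin down a unique solution.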

  4. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, in recovering both the blurring operator and the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  5. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and super-resolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and practical utility of the method.

  6. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe their oscillatory structure. Inspired by this structure, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
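
    As a point of reference for the regularized inverse-filtering idea, the classical Tikhonov-regularized inverse filter can be written in a few lines of numpy. This is a simple baseline, not the paper's star-norm/total-variation model:

```python
import numpy as np

def tikhonov_inverse_filter(g, h, lam=1e-3):
    """Deblur g given kernel h by applying the regularized inverse
    filter conj(H) / (|H|^2 + lam) in the Fourier domain; lam keeps
    near-zero spectral values of H from amplifying noise."""
    n = g.size
    H = np.fft.fft(h, n)
    G = np.fft.fft(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(F))

# Circular blur of a box signal, then regularized inversion.
f = np.zeros(64); f[20:30] = 1.0
h = np.zeros(64); h[:5] = 0.2        # 5-tap moving-average blur
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
f_hat = tikhonov_inverse_filter(g, h, lam=1e-6)
```

    The paper's contribution is a more refined choice of regularizer on the inverse filter itself; the lam term above plays the crudest version of that role.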

  7. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
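
    The frame-level parallelism described above can be sketched with a thread pool: each frame is processed independently and results are collected in input order. The per-frame work below is a trivial stand-in, not the authors' algorithm:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def deblur_frame(frame):
    """Placeholder per-frame work unit; a real pipeline would run a
    block of blind-deconvolution iterations here."""
    return frame - frame.mean()   # stand-in computation

frames = [np.full(8, float(i)) for i in range(4)]

# Serial reference.
serial = [deblur_frame(f) for f in frames]

# Frame-parallel version: Executor.map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(deblur_frame, frames))
```

    The paper's sub-frame parallelization splits the work *within* one frame as well, which matters when few frames are available.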

  8. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that, compared with traditional restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate, small initial PSF, blind deconvolution improves the overall quality of ultrasound images, yielding better SNR and image resolution. We also report the time consumption of these methods, which shows no significant increase on a GPU platform.
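
    The traditional baseline in comparisons like this is often Richardson-Lucy deconvolution with a known PSF. A compact 1-D numpy version (an assumed form, not necessarily the paper's implementation):

```python
import numpy as np

def richardson_lucy(g, psf, n_iter=50):
    """Classical Richardson-Lucy deconvolution with a known PSF;
    the multiplicative updates preserve non-negativity."""
    f = np.full_like(g, g.mean())          # flat non-negative start
    psf_flipped = psf[::-1]                # adjoint of the blur
    for _ in range(n_iter):
        conv = np.convolve(f, psf, mode="same")
        ratio = g / np.maximum(conv, 1e-12)
        f = f * np.convolve(ratio, psf_flipped, mode="same")
    return f

f_true = np.zeros(32); f_true[16] = 4.0    # a single bright scatterer
psf = np.array([0.25, 0.5, 0.25])
g = np.convolve(f_true, psf, mode="same")
f_hat = richardson_lucy(g, psf)
```

    Blind variants, as studied in this record, additionally update the PSF itself between object iterations.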

  9. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that samples the nuclear radiation signal and drives high-speed USB transfers in Slave FIFO mode, while LabVIEW performs online data processing and display. A blind deconvolution method removes pulse pile-up from the acquired signal and restores the nuclear pulse signal. Real-time measurements at high transmission speed demonstrate the advantages of the approach. (authors)

  10. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem, since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of non-dense stellar clusters.
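
    The alternating KL-minimization strategy can be illustrated with the classical blind Richardson-Lucy scheme, in which object and PSF updates of the same multiplicative form alternate. Note that without constraints the iteration can settle on the trivial delta-kernel solution, which is precisely the degeneracy the feasible sets in this record are designed to exclude. A generic sketch; the paper uses SGP-based constrained inner solvers:

```python
import numpy as np

def rl_update(u, v, g):
    """One multiplicative KL-descent update of u, holding the other
    convolution factor v fixed, for data g = u * v."""
    conv = np.convolve(u, v, mode="same")
    ratio = g / np.maximum(conv, 1e-12)
    return u * np.convolve(ratio, v[::-1], mode="same")

n = 33
f_true = np.zeros(n); f_true[16] = 1.0
h_true = np.zeros(n); h_true[15:18] = [0.2, 0.6, 0.2]
g = np.convolve(f_true, h_true, mode="same")

f = np.full(n, g.sum() / n)        # flat object guess
h = np.zeros(n); h[16] = 1.0       # centered delta PSF guess
for _ in range(30):
    f = rl_update(f, h, g)         # object step, PSF fixed
    h = rl_update(h, f, g)         # PSF step, object fixed
    h = h / h.sum()                # keep the PSF normalized
reblur = np.convolve(f, h, mode="same")
```

    The reconvolved estimate matches the data, but the split between object and PSF is only meaningful once constraints (non-negativity, normalization, support) are imposed on both unknowns, as the record emphasizes.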

  11. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

    In this paper, we present a method for the measurement and analysis of nuclear pulse signals, with which pile-up is removed, the signal baseline is restored, and the original signal is obtained. The data acquisition system comprises an FPGA, an ADC and a USB interface. The FPGA controls the high-speed ADC to sample the nuclear radiation signal, and the USB controller works in Slave FIFO mode for high-speed transmission. LabVIEW performs the online blind-deconvolution processing and data display. The simulation and experimental results demonstrate the advantages of the method. (authors)

  12. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    For the extraction of gearbox combined failures in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution is proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract the modulation features embedded in the observed signals; the CFPA algorithm is then employed for complex-domain blind separation; finally, the J-divergence of the spectra is used as the distance measure to resolve the permutation ambiguity. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gearbox combined failure detection in practice.

  13. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNRs down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and the mean square error (MSE).

  14. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation, and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and specially blurred images.

  15. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task, in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner, such that the likelihood of the given image being the convolution output is maximised; in this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm, while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)

  16. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise are widespread when spectra are captured with an infrared spectrometer, especially as instrument aging has become a serious problem. In this paper, a blind spectral deconvolution method is proposed that introduces the similarity of multiple scales. Since latent spectra at different scales are similar, this similarity is used as prior knowledge to constrain the estimated latent spectrum to resemble its previous scale, reducing the artifacts produced by deconvolution. The experimental results indicate that the proposed method achieves better performance than state-of-the-art methods and obtains satisfying deconvolution results with fewer artifacts. The recovered infrared spectra make it easy to extract spectral features and recognize unknown objects. (paper)

  17. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression, which is obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods are combined and applied to seismic data, yielding a robust spiking deconvolution approach. This hybrid deconvolution chains the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on the minimization of a cost function defined in terms of the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm alone. For real data, the SOOT algorithm requires initial values, such as the wavelet coefficients and the reflectivity series, which can be obtained from the MM algorithm. The computational cost of the hybrid method is high, and it is intended for post-stack or pre-stack seismic data from regions with complex structure.
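
    Sparse spike deconvolution of the kind the MM step performs can be sketched with ISTA (iterative shrinkage-thresholding) applied to an l1-regularized least-squares cost. This is a generic sketch, not the SOOT cost:

```python
import numpy as np

def ista_deconv(y, h, lam=0.05, n_iter=300):
    """ISTA for sparse (l1-regularized) deconvolution: a gradient step
    on the data-fit term, then soft-thresholding to promote a spiky
    reflectivity series. Circular convolution via the FFT."""
    H = np.fft.fft(h, y.size)
    L = np.max(np.abs(H)) ** 2             # Lipschitz constant of the fit term
    x = np.zeros_like(y)
    for _ in range(n_iter):
        r = np.real(np.fft.ifft(H * np.fft.fft(x))) - y   # residual h*x - y
        grad = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Sparse reflectivity convolved (circularly) with a short wavelet.
x_true = np.zeros(64); x_true[10] = 1.0; x_true[40] = -0.7
h = np.array([0.5, 1.0, 0.5])
y = np.real(np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(x_true)))
x_hat = ista_deconv(y, h)
```

    The recovered spikes sit at the true lags with slightly shrunken amplitudes, the usual l1 bias; SOOT's smoothed l1/l2 cost is designed to reduce exactly that scale dependence.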

  18. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Roč. 26, č. 5 (2017), s. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  19. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.
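
    The parametric blur structures that a scheme like PDR soft-selects among are typically families such as Gaussian or linear-motion kernels; generating one point on each manifold is straightforward (illustrative kernels, not the paper's code):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """One point on the Gaussian-blur parametric manifold."""
    t = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (t[:, None] ** 2 + t[None, :] ** 2) / sigma ** 2)
    return k / k.sum()

def motion_kernel(size, length):
    """One point on the horizontal linear-motion manifold."""
    k = np.zeros((size, size))
    c = size // 2
    k[c, c - length // 2 : c - length // 2 + length] = 1.0
    return k / k.sum()

g = gaussian_kernel(7, 1.5)   # parameter: sigma
m = motion_kernel(7, 5)       # parameter: motion length
```

    Each family is indexed by one or two parameters, which is what makes a soft, manifold-based blur model tractable compared with estimating every kernel entry freely.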

  20. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is contaminated by other sources of noise (read-out, photon counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrarily to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to the parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  1. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because information in the gradient domain is better suited to estimating the blur kernel, the kernel is estimated in the gradient domain; this problem can be solved quickly in the frequency domain using the fast Fourier transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring the accuracy of the results.
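
    The reason an L1/L2 ratio on gradients works as a prior is that blur spreads gradient energy, which raises the ratio, while a sharp image has sparse gradients and a ratio near its minimum. A small numpy illustration (assumed 1-D setting):

```python
import numpy as np

def grad_l1_over_l2(x):
    """Sparsity measure used as a blind-deconvolution prior: the ratio
    ||dx||_1 / ||dx||_2 of the finite-difference gradient. It is scale
    invariant and grows as blur spreads the gradient energy."""
    d = np.diff(x)
    return np.abs(d).sum() / np.linalg.norm(d)

sharp = np.concatenate([np.zeros(32), np.ones(32)])   # ideal step edge
blur_kernel = np.ones(9) / 9.0
blurred = np.convolve(sharp, blur_kernel, mode="same")

r_sharp = grad_l1_over_l2(sharp)    # = 1 for a single-jump edge
r_blur = grad_l1_over_l2(blurred)   # larger: the gradient is spread out
```

    Minimizing this ratio therefore steers the estimate back toward sharp, sparse-gradient images, unlike a plain L1 penalty, which can be trivially reduced by scaling the image down.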

  2. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...

  3. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  4. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
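
    The partial-map idea, trusting only the Fourier entries where the estimated kernel's spectrum is reliable, can be sketched by thresholding spectral magnitude; the detection rule below is a simple stand-in for the paper's:

```python
import numpy as np

def partial_fourier_mask(h_est, rel_thresh=0.1):
    """Mark Fourier entries of an estimated kernel as reliable when
    their magnitude is a sizeable fraction of the spectral peak; only
    those entries would be inverted in a partial deconvolution."""
    H = np.fft.fft(h_est)
    return np.abs(H) >= rel_thresh * np.abs(H).max()

h = np.zeros(16); h[:4] = 0.25   # 4-tap box kernel
mask = partial_fourier_mask(h)   # False at the spectral nulls k = 4, 8, 12
```

    A box kernel has exact spectral nulls, and any kernel estimation error is amplified most near such nulls, which is why excluding them from the inversion suppresses ringing.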

  5. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

    Roč. 16, č. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  6. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

    Roč. 21, č. 4 (2012), s. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  7. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), the ratio between the peak intensity of the aberrated PSF and that of the ideal, unaberrated one. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
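
    The PSF feasible set described above (non-negative, unit sum, peak bounded by a Strehl-ratio value) admits a simple approximate projection by alternating clipping and renormalization. This heuristic is a sketch, not the paper's exact projection:

```python
import numpy as np

def project_psf(psf, strehl_bound=0.3, n_sweeps=50):
    """Approximately enforce the PSF constraints from the record above
    (non-negativity, unit sum, and a peak value capped by a
    Strehl-ratio bound) by alternating clipping and renormalization."""
    p = np.clip(psf, 0.0, None)
    for _ in range(n_sweeps):
        p = np.clip(p, 0.0, strehl_bound)   # cap the peak value
        if p.sum() == 0.0:                  # degenerate input: reset flat
            p = np.full_like(p, 1.0 / p.size)
        p = p / p.sum()                     # restore unit sum
    return p

raw = np.array([0.8, 0.1, 0.05, 0.05])      # peak violates the 0.3 bound
p = project_psf(raw)
```

    Mass clipped off the peak is redistributed to the remaining entries; the alternation converges quickly to a vector satisfying all three constraints.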

  8. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  9. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component, with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case the impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
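    The TV-regularized deconvolution step can be illustrated on a 1-D trace by gradient descent on a smoothed total-variation objective; the wavelet, regularization weight, smoothing parameter and periodic boundaries below are arbitrary illustrative choices, not those of the paper.

```python
import numpy as np

def tv_deconvolve(g, s, lam=0.05, n_iter=300, step=0.2, eps=1e-2):
    """Minimize 0.5*||s*f - g||^2 + lam*sum_i sqrt((f_{i+1}-f_i)^2 + eps)
    by plain gradient descent (smoothed TV, periodic boundaries); an
    illustrative stand-in for a TV deconvolution solver."""
    S = np.fft.fft(s)
    f = g.copy()
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft(S * np.fft.fft(f))) - g
        grad_data = np.real(np.fft.ifft(np.conj(S) * np.fft.fft(resid)))
        df = np.roll(f, -1) - f                 # forward differences
        w = df / np.sqrt(df ** 2 + eps)         # smoothed TV derivative
        grad_tv = np.roll(w, 1) - w             # adjoint difference of w
        f = f - step * (grad_data + lam * grad_tv)
    return f
```

    With a unit-sum non-negative wavelet the data term has Lipschitz constant at most 1 and the smoothed TV term at most 4*lam/sqrt(eps), so the fixed step above keeps the descent monotone.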

  10. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  11. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from adaptive-optics (AO) corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
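    One plausible reading of the bandwidth constraint is a projection of each PSF estimate onto the set of band-limited, non-negative, unit-sum kernels; the cutoff value and the clip-then-renormalize order in this sketch are illustrative assumptions, not details from the paper.

```python
import numpy as np

def project_bandlimit(psf, cutoff):
    """Zero OTF components beyond the cutoff (in cycles/pixel), then
    restore non-negativity and unit sum of the PSF (illustrative)."""
    otf = np.fft.fft2(psf)
    fy = np.fft.fftfreq(psf.shape[0])[:, None]
    fx = np.fft.fftfreq(psf.shape[1])[None, :]
    mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff   # band-limit support
    psf_bl = np.real(np.fft.ifft2(otf * mask))
    psf_bl = np.maximum(psf_bl, 0.0)              # non-negativity
    return psf_bl / psf_bl.sum()                  # unit-sum PSF
```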

  12. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (in contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae using the point spread function generated by the method presented in this paper (the 'extracted PSF') with that using a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function than with the synthetic one, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model.

  13. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2006-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  14. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signals' first and last coefficients. We give an analytical expression for this constant using spectral bounds of Vandermonde matrices.

  15. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  16. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  17. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring

    Directory of Open Access Journals (Sweden)

    Naixue Xiong

    2017-01-01

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth, with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by numerical methods based on the alternating direction method of multipliers (ADMM). Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
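    The ADMM treatment of a hybrid L1/L2 regularizer can be sketched for a generic linear model; the matrix sizes, the penalty parameter rho, and the smoothing operator D used below are illustrative assumptions, not the paper's actual kernel-estimation setup.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1 -- the z-update in ADMM
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_l1_l2(A, b, D, alpha=0.05, beta=0.1, rho=1.0, n_iter=200):
    """ADMM sketch for min_k 0.5||A k - b||^2 + alpha||k||_1
    + 0.5*beta||D k||^2, a simplified stand-in for a hybrid
    sparse-plus-smooth kernel regularizer."""
    n = A.shape[1]
    z = np.zeros(n)   # sparse copy of the kernel
    u = np.zeros(n)   # scaled dual variable
    # the k-update is a fixed linear solve; form the matrix once
    M = A.T @ A + beta * (D.T @ D) + rho * np.eye(n)
    rhs0 = A.T @ b
    for _ in range(n_iter):
        k = np.linalg.solve(M, rhs0 + rho * (z - u))  # smooth subproblem
        z = soft_threshold(k + u, alpha / rho)        # L1 subproblem
        u = u + k - z                                 # dual update
    return z
```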

  18. A HOS-based blind deconvolution algorithm for the improvement of time resolution of mixed phase low SNR seismic data

    International Nuclear Information System (INIS)

    Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M

    2009-01-01

    A blind deconvolution technique using a modified higher-order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low-SNR, short-length seismograms. In our study, the seismogram is assumed to be the output of a mixed phase source wavelet (system) driven by a non-Gaussian input signal (due to the earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum phase seismic signals because they rely only on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable for processing a non-minimum (mixed) phase system; however, most of them fail to converge and show poor performance whenever noise dominates the actual signal, especially when the observed data are limited (few samples). The developed technique is primarily based on the EVA for blind equalization, originally designed to deal with mixed phase non-Gaussian seismic signals. To deal with the dominant noise and the small number of available samples, certain modifications are incorporated into the EVA. One modification, for determining the deconvolution filter, is to use more than one higher-order cumulant slice in the EVA; this overcomes possible non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by raising the power of the eigenvalues related to the actual signal and rejecting the eigenvalues below a threshold representing the noise; this reduces the effect of the small number of available samples and of strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in the mean square error (MSE) and a six

  19. Blind deconvolution of seismograms regularized via minimum support

    International Nuclear Information System (INIS)

    Royer, A A; Bostock, M G; Haber, E

    2012-01-01

    The separation of earthquake source signature and propagation effects (the Earth's 'Green's function') that encode a seismogram is a challenging problem in seismology. The task of separating these two effects is called blind deconvolution. By considering seismograms of multiple earthquakes from similar locations recorded at a given station, which therefore share the same Green's function, we may write a linear relation in the time domain u_i(t)*s_j(t) − u_j(t)*s_i(t) = 0, where u_i(t) is the seismogram for the ith source and s_j(t) is the jth unknown source. The symbol * represents the convolution operator. From two or more seismograms, we obtain a homogeneous linear system whose unknowns are the sources. This system is subject to a scaling constraint to deliver a non-trivial solution. Since source durations are not known a priori and must be determined, we augment our system by introducing the source durations as unknowns and we solve the combined system (sources and source durations) using separation of variables. Our solution is derived using direct linear inversion to recover the sources and Newton's method to recover the source durations. This method is tested using two sets of synthetic seismograms created by convolution of (i) random Gaussian source-time functions and (ii) band-limited sources with a simplified Green's function, with signal-to-noise levels up to 10%, with encouraging results. (paper)
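    For two seismograms sharing one Green's function, the relation u_1(t)*s_2(t) − u_2(t)*s_1(t) = 0 becomes a homogeneous linear system in the stacked source samples. A minimal sketch, assuming a known, fixed source length and noise-free data (omitting the paper's scaling constraint and Newton iteration on durations), recovers the null vector by SVD:

```python
import numpy as np

def conv_matrix(u, m):
    """Toeplitz matrix T with T @ s == np.convolve(u, s) for len(s) == m."""
    n = u.size
    T = np.zeros((n + m - 1, m))
    for j in range(m):
        T[j:j + n, j] = u
    return T

def blind_sources(u1, u2, m):
    """Solve [T(u1) | -T(u2)] @ [s2; s1] = 0 for the stacked sources
    (up to a common scale) via the smallest right singular vector."""
    A = np.hstack([conv_matrix(u1, m), -conv_matrix(u2, m)])
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]            # unit-norm null-space direction
    return v[:m], v[m:]   # s2, s1 up to one overall scale factor
```

    The SVD's unit-norm convention plays the role of the scaling constraint here; generically (coprime sources) the null space is one-dimensional, so the stacked sources are identified up to that single scale.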

  20. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  1. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great practical value. A further optimization of SeDDaRA is developed, covering both the algorithm structure and the numerical calculation methods: the structure is modularized for implementation feasibility, the computational load and data dependency of the 2D FFT/IFFT are reduced, and the power operation is accelerated by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image-restoration system exceeds 7.8 Msps. The optimization is shown to be efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  2. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed; the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples both at and away from thermal equilibrium is reported. The method is accurate and computationally efficient.

  3. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system, thereby improving the final resolution of the signals. Two main characteristics of ultrasonic signals make this task difficult. First, the signals generated by transducers are very often non-minimum phase, which classical deconvolution algorithms are unable to handle. Second, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type methods, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data; one specific experimental set-up was also analysed, producing both simulated and real data. This set-up demonstrated the interest of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs.

  4. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

    A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of a linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of errors of ranging due to peak overlap or peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra, the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated

  5. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  6. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thereby simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation time, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation. The experiments show that image deconvolution based on the continuous GRBF model can be implemented efficiently by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
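    The simplification above rests on the closure of Gaussians under convolution (their variances add), which is easy to verify numerically; the grid below is an arbitrary illustration.

```python
import numpy as np

def gaussian(x, mu, sigma):
    # normalized Gaussian density sampled on the grid x
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# sample two Gaussians and convolve them numerically
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
c = np.convolve(gaussian(x, 0.0, 1.0), gaussian(x, 0.0, 2.0), mode="same") * dx

# the result matches a single Gaussian whose variance is 1^2 + 2^2 = 5
target = gaussian(x, 0.0, np.sqrt(5.0))
err = np.max(np.abs(c - target))   # small discretization error
```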

  7. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

    In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin the adaptation and application of this method to the deconvolution of positron lifetime annihilation spectra.

  8. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
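    The underlying iteration is classical SOR with a relaxation parameter omega in (0, 2); a minimal dense-matrix sketch with a fixed omega is shown below (the paper's contribution is precisely the adaptive choice of omega, which this sketch omits).

```python
import numpy as np

def sor(A, b, omega=1.5, n_iter=300, x0=None):
    """Successive over-relaxation for A x = b with A symmetric positive
    definite; each sweep is Gauss-Seidel relaxed by the factor omega."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(n_iter):
        for i in range(n):
            # sum of off-diagonal terms, using already-updated entries
            sigma = A[i] @ x - A[i, i] * x[i]
            gs = (b[i] - sigma) / A[i, i]     # Gauss-Seidel value
            x[i] = x[i] + omega * (gs - x[i]) # relaxed update
    return x
```

    For symmetric positive definite systems (such as the normal equations of a deconvolution problem) SOR converges for any omega in (0, 2), and the convergence speed depends strongly on that choice.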

  9. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  10. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one which can retrieve all source parameters (location, spectrum, polarization, flux) and achieve the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: first, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode); both together have not been possible up to now. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  11. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least squares method. Trials with computer-simulated D-DBAR spectra are generated and deconvoluted in order to find the best form of the regularizer and the regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution, after a background subtraction and using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)
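    The single-operation structure of a regularized least-squares deconvolution can be illustrated in Fourier space; this sketch assumes periodic boundaries rather than the paper's Neumann (reflective) ones, and the regularization weight is an arbitrary assumption.

```python
import numpy as np

def regularized_deconvolve(y, r, lam=1e-6):
    """Tikhonov-regularized least-squares deconvolution: once the data y
    and the resolution function r are transformed, the solve is a single
    element-wise operation (periodic boundaries assumed for brevity)."""
    R = np.fft.fft(r)
    Y = np.fft.fft(y)
    X = np.conj(R) * Y / (np.abs(R) ** 2 + lam)  # regularized inverse
    return np.real(np.fft.ifft(X))
```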

  12. Blind phase retrieval for aberrated linear shift-invariant imaging systems

    International Nuclear Information System (INIS)

    Yu, Rotha P; Paganin, David M

    2010-01-01

    We develop a means to reconstruct an input complex coherent scalar wavefield, given a through focal series (TFS) of three intensity images output from a two-dimensional (2D) linear shift-invariant optical imaging system with unknown aberrations. This blind phase retrieval technique unites two methods, namely (i) TFS phase retrieval and (ii) iterative blind deconvolution. The efficacy of our blind phase retrieval procedure has been demonstrated using simulated data, for a variety of Poisson noise levels.

  13. Application of blind deconvolution with crest factor for recovery of original rolling element bearing defect signals

    International Nuclear Information System (INIS)

    Son, J. D.; Yang, B. S.; Tan, A. C. C.; Mathew, J.

    2004-01-01

    Many machine failures are not detected well in advance due to the masking of background noise and the attenuation of the source signal through the transmission media. Advanced signal processing techniques using adaptive filters and higher order statistics have been attempted to extract the source signal from the data measured at the machine surface. In this paper, blind deconvolution using the Eigenvector Algorithm (EVA) technique is used to recover a damaged bearing signal using only the signal measured at the machine surface. A damaged bearing signal corrupted by noise with varying signal-to-noise (s/n) ratios was used to determine the effectiveness of the technique in detecting an incipient signal and the optimum choice of filter length. The results show that the technique is effective in detecting the source signal at an s/n ratio as low as 0.21, but requires a relatively large filter length.

  14. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a great deal of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  15. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic energy needed to penetrate strongly attenuating materials can only be obtained with resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but deconvolution is problematic for austenitic steel. A time-domain deconvolution method is developed here that takes the characteristics of the wave into account: a first step of phase correction and a second step of spectral equalization that restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method.

  16. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI.

    Science.gov (United States)

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn

    2016-03-01

    One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.

  17. Filtering and deconvolution for bioluminescence imaging of small animals; Filtrage et deconvolution en imagerie de bioluminescence chez le petit animal

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

    This thesis is devoted to the analysis of bioluminescence images of small animals. This imaging modality is used in cancerology studies. However, the light emitted by internal bioluminescent sources is diffused and absorbed by the tissues, and both system noise and cosmic-ray noise are present. These effects degrade the quality of the images and make them difficult to analyze; the purpose of this thesis is to overcome them. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulse noise that corrupts the acquired images; this filter is the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the overall approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)
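
    As a hedged sketch of the filtering stage only (a plain sliding-window median filter for random-valued impulse noise, not the author's new filter), consider:

```python
def median_filter_1d(signal, width=3):
    """Sliding-window median filter; edges handled by clamping the window."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

# A smooth ramp corrupted by a single random-valued impulse.
clean = [float(i) for i in range(10)]
noisy = clean[:]
noisy[5] = 100.0              # impulse
restored = median_filter_1d(noisy)
print(restored[5])            # → 6.0 (impulse suppressed)
```

    The median replaces the outlier with a neighbouring sample while leaving monotone regions almost untouched, which is why median filtering is preferred over linear smoothing for impulse noise.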

  18. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against cell frequencies measured by well-established flow cytometry, even after batch correction is applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mononucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283
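
    The basic (non-perturbed) expression deconvolution that PERT extends can be illustrated with a toy least-squares example; the two reference profiles below are hypothetical values, not data from the paper:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def deconvolve_fractions(r1, r2, mix):
    """Least-squares fractions f1, f2 with mix ≈ f1*r1 + f2*r2
    (2x2 normal equations, solved in closed form)."""
    a, b, c = dot(r1, r1), dot(r1, r2), dot(r2, r2)
    d1, d2 = dot(r1, mix), dot(r2, mix)
    det = a * c - b * b
    return (c * d1 - b * d2) / det, (a * d2 - b * d1) / det

# Hypothetical reference expression profiles of two cell populations
r1 = [10.0, 0.0, 5.0]
r2 = [0.0, 8.0, 5.0]
mix = [0.3 * x + 0.7 * y for x, y in zip(r1, r2)]   # 30/70 mixture
f1, f2 = deconvolve_fractions(r1, r2, mix)
print(round(f1, 3), round(f2, 3))   # → 0.3 0.7
```

    PERT's contribution is precisely that this recovery breaks down when the true constituent profiles are a perturbed version of r1 and r2; it estimates the shared multiplicative perturbation jointly with the fractions.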

  19. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  20. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non-Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which severely limits the resolution of the received echograms. This resolution may be improved with deconvolution methods, but one-dimensional deconvolution methods run into problems in non-destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature, but they often come from other application fields (biomedical engineering, geophysics), and we show that they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats deconvolution as an estimation problem and is performed in two steps: (i) a phase-correction step which accounts for the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is due only to the transducer and is assumed time-invariant during propagation; (ii) a band-equalization step which restores the spectral content of the ideal reflectivity. Both steps are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and experimental results are given to show that this is a good approach for resolution improvement in attenuating media. [fr]
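
    The record above centres on fast Kalman filters as the estimation engine. As a hedged illustration of the Kalman predict/update equations only (a minimal scalar filter estimating a constant level, not the fast Kalman deconvolution of the thesis), one might write:

```python
import random

def kalman_constant(measurements, r_var, q_var=0.0):
    """Scalar Kalman filter estimating a constant level from noisy samples."""
    x, p = 0.0, 1e6          # vague prior on state and its variance
    for z in measurements:
        p += q_var           # predict step (state assumed constant)
        k = p / (p + r_var)  # Kalman gain
        x += k * (z - x)     # update with the innovation z - x
        p *= (1.0 - k)       # posterior variance
    return x

random.seed(1)
zs = [2.0 + random.gauss(0.0, 0.3) for _ in range(500)]
est = kalman_constant(zs, r_var=0.09)
print(round(est, 2))   # close to the true level 2.0
```

    With a constant state and a vague prior, this filter converges to the running mean; the "fast Kalman" variants cited in the record exploit the Toeplitz structure of the deconvolution problem to cut the cost per sample.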

  1. Semi-blind sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  2. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

    This thesis is devoted to the analysis of bioluminescence images of small animals. This imaging modality is used in cancerology studies. However, the light emitted by internal bioluminescent sources is diffused and absorbed by the tissues, and both system noise and cosmic-ray noise are present. These effects degrade the quality of the images and make them difficult to analyze; the purpose of this thesis is to overcome them. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulse noise that corrupts the acquired images; this filter is the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the overall approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  3. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The adopted point of view consists in incorporating the physical properties into the signal processing in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function representing the waveform emitted by the transducer and a function loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information, which reflects the physical properties of the ultrasonic signals, must be taken into account. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution then becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets reflecting typical inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  4. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
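
    The Monte-Carlo error-propagation idea can be sketched with a toy 2x2 response matrix (hypothetical values, not the paper's detector model): repeat the unfolding on noise-perturbed counts and take the spread of the results as the error bar:

```python
import random
import statistics

# Hypothetical 2-channel response matrix (rows: channels, cols: energy bins)
R = [[0.8, 0.2],
     [0.3, 0.7]]
true_spec = [100.0, 50.0]
counts = [R[i][0] * true_spec[0] + R[i][1] * true_spec[1] for i in range(2)]

def unfold(c):
    """Deconvolve by inverting the 2x2 response matrix exactly."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    return [(R[1][1] * c[0] - R[0][1] * c[1]) / det,
            (R[0][0] * c[1] - R[1][0] * c[0]) / det]

random.seed(0)
# Each trial perturbs the counts with Poisson-like noise (sigma = sqrt(count))
trials = [unfold([x + random.gauss(0.0, x ** 0.5) for x in counts])
          for _ in range(2000)]
mean0 = statistics.mean(t[0] for t in trials)
err0 = statistics.stdev(t[0] for t in trials)
print(round(mean0, 1), "+/-", round(err0, 1))   # ~100 with its Monte-Carlo error bar
```

    Note how the unfolded error bar (~14 here) is much larger than the raw counting error (~10), because the matrix inversion amplifies noise; this is exactly the propagation effect the record says is often neglected.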

  5. Analysis of blind identification methods for estimation of kinetic parameters in dynamic medical imaging

    Science.gov (United States)

    Riabkov, Dmitri

    Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of

  6. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
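
    A minimal sketch of Fourier deconvolution on a circular grid (a naive DFT illustration, not the article's implementation; real spectra need apodization or filtering to control noise amplification, and the kernel here is chosen to have no spectral zeros):

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform; sign=-1 forward, sign=+1 inverse core."""
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=+1)]

n = 8
# "Stick" spectrum: two overlapping lines
sticks = [0.0] * n
sticks[2], sticks[3] = 1.0, 0.8
# Symmetric broadening kernel (circular); its DFT is 0.6 + 0.4*cos, never zero
h = [0.0] * n
h[0], h[1], h[-1] = 0.6, 0.2, 0.2
# Broadened spectrum = circular convolution of sticks with h
broad = [sum(sticks[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]

# Fourier deconvolution: divide the spectra, then transform back
S = [Y / H for Y, H in zip(dft(broad), dft(h))]
recovered = [v.real for v in idft(S)]
print([round(v, 6) for v in recovered])   # recovers the two sticks at 2 and 3
```

    Division in the Fourier domain undoes the broadening exactly in this noise-free case; with experimental noise the high-frequency quotient blows up, which is why the article pairs the division with a resolution-enhancement filter.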

  7. Stochastic Blind Motion Deblurring

    KAUST Repository

    Xiao, Lei

    2015-05-13

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can therefore only be obtained with the help of prior information in the form of (often non-convex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors produces results with PSNR values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms.

  8. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid-scale profiles. Two methodologies are proposed: the first relies on subgrid-scale interpolation of deconvolved profiles and the second uses parametric functions to describe the small scales. The tests analyse the ability of the method to capture the chemical filtered flame structure and the front propagation speed. Results show that the deconvolution model should include information about the small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
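
    Of the three methods tested, the Van Cittert iteration is the simplest to sketch. The update is x ← x + β(y − h∗x), starting from the filtered signal itself; the filter and signal below are hypothetical one-dimensional stand-ins for the filtered flame profiles:

```python
def conv_same(x, h):
    """Linear convolution with a centred kernel, truncated to len(x)."""
    n, m = len(x), len(h)
    half = m // 2
    return [sum(x[i + half - j] * h[j]
                for j in range(m) if 0 <= i + half - j < n)
            for i in range(n)]

def van_cittert(y, h, iters=200, beta=1.0):
    """Iterative deconvolution: x_{k+1} = x_k + beta * (y - h * x_k)."""
    x = y[:]
    for _ in range(iters):
        r = conv_same(x, h)
        x = [xi + beta * (yi - ri) for xi, yi, ri in zip(x, y, r)]
    return x

h = [0.25, 0.5, 0.25]                 # mild low-pass "LES filter" (sums to 1)
truth = [0, 0, 1.0, 0, 0, 0.6, 0, 0]  # unfiltered profile (hypothetical)
y = conv_same(truth, h)               # filtered profile
x = van_cittert(y, h)
print([round(v, 3) for v in x])       # approaches the unfiltered profile
```

    Convergence requires the filter's spectral response to stay within (0, 2/β); in practice the iteration is truncated after a few steps, which acts as the regularisation the abstract refers to.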

  9. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The adopted point of view consists in incorporating the physical properties into the signal processing in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function representing the waveform emitted by the transducer and a function loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information, which reflects the physical properties of the ultrasonic signals, must be taken into account. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution then becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets reflecting typical inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  10. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting, and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak-width variation performs best. The ultimate approach, yet to be implemented, might be library-oriented fitting with peak positions allowed to vary according to the peak-energy uncertainty specified in the library. (author)

  11. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
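
    The Levinson recursion mentioned above, which solves the Toeplitz normal equations for the prediction-error filter, can be sketched as follows (a textbook implementation, not the authors' code); note that the reflection coefficient k stays below 1 in magnitude, matching the stability argument in the abstract:

```python
def levinson(r, order):
    """Levinson-Durbin recursion: prediction-error filter a (a[0] = 1)
    and final prediction error, from autocorrelation values r[0..order]."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient for order m
        k = -sum(a[j] * r[m - j] for j in range(m)) / err
        a_new = a[:]
        for j in range(1, m):
            a_new[j] = a[j] + k * a[m - j]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)   # error shrinks while |k| < 1
    return a, err

# Autocorrelation of an AR(1) process x[t] = 0.5*x[t-1] + e[t]: r[k] ∝ 0.5**k
r = [0.5 ** k for k in range(4)]
a, err = levinson(r, 2)
print([round(c, 6) for c in a], round(err, 6))   # a = [1.0, -0.5, 0.0], err = 0.75
```

    For this AR(1) autocorrelation the recursion recovers the generating filter exactly, and the second reflection coefficient is zero because one lag already explains the process.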

  12. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  13. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern with the targets' backscattering coefficients. Deconvolution algorithms are therefore proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially highly ill-posed: it is sensitive to noise and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed to improve the performance of deconvolution, considerably alleviating the ill-posedness of the single-channel problem. To depict the performance improvement obtained by the multi-channel approach more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular-value distribution of the observation matrix, and these are used to compare different deconvolution systems. We present two multi-channel deconvolution algorithms that improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data verify the effectiveness of the proposed imaging methods.

  14. Probabilistic blind deconvolution of non-stationary sources

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    We solve a class of blind signal separation problems using a constrained linear Gaussian model. The observed signal is modelled by a convolutive mixture of colored noise signals with additive white noise. We derive a time-domain EM algorithm `KaBSS' which estimates the source signals...

  15. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    Thermoluminescence dosimeters (TLDs), especially of LiF:Mg,Ti material, are among the most practical personal dosimeters known to date. Measuring doses under 100 uGy with a TLD reader at a high precision level is very difficult, so software analysis is used to improve the reader's precision. The objective of the research is to compare three TL glow-curve analysis methods for doses between 5 and 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between pre-selected temperature limits, and the background signal is estimated by a second readout following the first. The second method is deconvolution: the glow curve is separated mathematically into its component peaks, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six-fold over manual analysis at a dose of 20 uGy, and reduces the MMD to 10 uGy, compared with 60 uGy for manual analysis and 20 uGy for the peak-5 area method. In linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose-response curve over the entire dose range.

  16. MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
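
    The peakedness measure underlying minimum entropy deconvolution (shown here in its common varimax-norm form; the exact definition in the paper may differ) can be illustrated directly:

```python
def peakedness(y):
    """Varimax-style norm used in minimum entropy deconvolution:
    sum(y^4) / (sum(y^2))^2 — larger for spiky sequences,
    maximal (1.0) when all energy sits in a single sample."""
    s2 = sum(v * v for v in y)
    s4 = sum(v ** 4 for v in y)
    return s4 / (s2 * s2)

spiky = [0.0, 0.0, 5.0, 0.0, 0.0]   # single impulse
smooth = [1.0, 1.0, 1.0, 1.0, 1.0]  # flat sequence
print(peakedness(spiky), peakedness(smooth))   # → 1.0 vs 0.2 (= 1/n for flat)
```

    Minimum entropy deconvolution chooses the inverse filter whose output maximizes this measure, driving the deconvolved sequence toward isolated spikes, which is the desired form for reflectivity-like signals.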

  17. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameters estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance than existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  18. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    Anelastic attenuation process during seismic wave propagation is the trigger of seismic non-stationary characteristic. An absorption and a scattering of energy are causing the seismic energy loss as the depth increasing. A series of thin reservoir layers found in the study area is located within Talang Akar Fm. Level, showing an indication of interpretation pitfall due to attenuation effect commonly occurred in deeper level seismic data. Attenuation effect greatly influences the seismic images of deeper target level, creating pitfalls in several aspect. Seismic amplitude in deeper target level often could not represent its real subsurface character due to a low amplitude value or a chaotic event nearing the Basement. Frequency wise, the decaying could be seen as the frequency content diminishing in deeper target. Meanwhile, seismic amplitude is the simple tool to point out Direct Hydrocarbon Indicator (DHI) in preliminary Geophysical study before a further advanced interpretation method applied. A quick-look of Post-Stack Seismic Data shows the reservoir associated with a bright spot DHI while another bigger bright spot body detected in the North East area near the field edge. A horizon slice confirms a possibility that the other bright spot zone has smaller delineation; an interpretation pitfall commonly occurs in deeper level of seismic. We evaluates this pitfall by applying Gabor Deconvolution to address the attenuation problem. Gabor Deconvolution forms a Partition of Unity to factorize the trace into smaller convolution window that could be processed as stationary packets. Gabor Deconvolution estimates both the magnitudes of source signature alongside its attenuation function. The enhanced seismic shows a better imaging in the pitfall area that previously detected as a vast bright spot zone. 
When the enhanced seismic is used for further advanced reprocessing, the Seismic Impedance and Vp/Vs Ratio slices show a better reservoir delineation, in which the
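The partition of unity that Gabor deconvolution relies on can be sketched in a few lines of numpy. This is an illustrative sketch only: the Gaussian window shape, spacing and width are our assumptions, not the paper's.

```python
import numpy as np

def pou_windows(n, centers, width):
    """Overlapping Gaussian windows normalized pointwise so that they
    sum to one everywhere (a partition of unity), as used by Gabor
    deconvolution to split a trace into quasi-stationary packets."""
    t = np.arange(n)
    raw = np.stack([np.exp(-0.5 * ((t - c) / width) ** 2) for c in centers])
    return raw / raw.sum(axis=0)  # pointwise normalization
```

Multiplying the trace by each window yields local, approximately stationary segments; because the windows sum to one, adding the processed segments back reconstructs the full trace.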

  19. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  20. Multichannel blind iterative image restoration

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Flusser, Jan

    2003-01-01

    Roč. 12, č. 9 (2003), s. 1094-1106 ISSN 1057-7149 R&D Projects: GA ČR GA102/00/1711 Institutional research plan: CEZ:AV0Z1075907 Keywords : conjugate gradient * half-quadratic regularization * multichannel blind deconvolution Subject RIV: BD - Theory of Information Impact factor: 2.642, year: 2003 http://library.utia.cas.cz/prace/20030104.pdf

  1. Estimation of Input Function from Dynamic PET Brain Data Using Bayesian Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Roč. 12, č. 4 (2015), s. 1273-1287 ISSN 1820-0214 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * Variational Bayes method * dynamic PET * input function * deconvolution Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2015 http://library.utia.cas.cz/separaty/2015/AS/tichy-0450509.pdf

  2. Fruit fly optimization based least square support vector regression for blind image restoration

    Science.gov (United States)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, these are not available in many real image processing applications, so the recovery must be treated as a blind image restoration problem. Since blind deconvolution is an ill-posed problem, many blind restoration methods must make additional assumptions to construct restrictions. Because the PSF and noise energy differ from case to case, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for a support vector machine is essential to the training result. As a novel meta-heuristic, the fruit fly optimization algorithm (FOA) can handle optimization problems and has the advantage of fast convergence to the global optimum. In the proposed method, the training samples map a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with a fitness function given by the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. 
Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and

  3. Bayesian Blind Separation and Deconvolution of Dynamic Image Sequences Using Sparsity Priors

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Roč. 34, č. 1 (2015), s. 258-266 ISSN 0278-0062 R&D Projects: GA ČR GA13-29225S Keywords : Functional imaging * Blind source separation * Computer-aided detection and diagnosis * Probabilistic and statistical methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.756, year: 2015 http://library.utia.cas.cz/separaty/2014/AS/tichy-0431090.pdf

  4. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

It is well known that under specific conditions the cross-correlation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently, it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference from cross-correlation methods, which rely on the condition that the waves are equipartitioned. This condition is fulfilled, for example, when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
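The 1D deconvolution process mentioned above is often implemented as spectral division with a water-level stabilizer. The sketch below assumes circular convolution and an illustrative `eps` setting; it shows how the division removes the source waveform, which correlation alone does not.

```python
import numpy as np

def waterlevel_deconv(obs, ref, eps=1e-3):
    """Deconvolve `ref` (e.g. the source waveform) out of `obs` by
    spectral division, clipping the denominator at a fraction `eps`
    of its maximum to keep the division stable."""
    n = len(obs)
    O = np.fft.rfft(obs)
    R = np.fft.rfft(ref, n)
    denom = np.maximum(np.abs(R) ** 2, eps * np.max(np.abs(R) ** 2))
    return np.fft.irfft(O * np.conj(R) / denom, n)
```

Applied to two recordings of the same waveform, the result approximates a band-limited spike at the differential travel time, independent of the source spectrum.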

  5. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. 
A very common image-processing algorithm, image deconvolution, is used
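As a concrete example of the restoration class described in this chapter (an estimate iterated so that its convolution with the PSF reproduces the data), here is a minimal Richardson-Lucy loop. It is a sketch, not a production implementation: the centered PSF, circular (FFT-based) convolution and the iteration count are assumptions.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=100):
    """Richardson-Lucy restoration for 1-D or 2-D data. `psf` must be
    centered in its array; convolutions are circular (FFT-based)."""
    psf = psf / psf.sum()
    otf = np.fft.rfftn(np.fft.ifftshift(psf))
    conv = lambda x, k: np.fft.irfftn(np.fft.rfftn(x) * k, s=data.shape)
    est = np.full_like(data, data.mean())        # flat starting guess
    for _ in range(n_iter):
        ratio = data / np.maximum(conv(est, otf), 1e-12)
        est = est * conv(ratio, np.conj(otf))    # multiply by the adjoint
    return est
```

With a normalized PSF the iteration conserves total flux, which is one reason restoration algorithms of this kind are suited to quantitative imaging.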

  6. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  7. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the actual source distribution with the beamformer's point-spread function, provided that the point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2, the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.

  8. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and allocate special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques, based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution

  9. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomic image processing. Due to the existence of noise in astronomical data there is no certainty that a mathematically exact result of stellar deconvolution exists and iterative or other methods such as aperture or PSF fitting photometry are commonly used. Iterative methods are important namely in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.

  10. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT. We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM and contrast-to-noise ratio (CNR of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum, achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov

  11. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier transform method and iterative numerical methods. The positivity of the measured magnitude is the basis of a new method by Yvon Biraud. Analytic continuation of the Fourier transform applied to the unknown function has been studied by Jean-Paul Sheidecker. An extensive bibliography is given [fr

  12. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
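The Universal Quality Index used above to quantify the improvement has a simple closed form (Wang and Bovik's index, combining correlation, luminance and contrast terms); a sketch over image arrays:

```python
import numpy as np

def uqi(x, y):
    """Universal Quality Index: 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)).
    Equals 1.0 only when the two images are identical."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

In practice the index is often computed over a sliding window and averaged; the global version above is the simplest variant.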

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  14. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  15. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations is by all means a very important aspect of SSI analysis since it influences the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one as well as two dimensional deconvolution. The type of waves considered include P, SV and SH waves. The non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed-form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free field motion both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra (c) Fourier spectra and (d) cross-spectral densities

  16. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were: the centroid distance/FWHM ratio, the Signal/Continuum ratio and the counting rate. The test using a synthetic spectrum was built with 3σ uncertainty. The tests gave suitable results for a centroid distance/FWHM ratio ≥2, a Signal/Continuum ratio ≥2 and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg -1 to (190±17) Bq.kg -1 for potassium-40; from (343±7) Bq.kg -1 to (881±6) Bq.kg -1 for thorium-232 and from (100±3) Bq.kg -1 to (164±4) Bq.kg -1 for uranium-238. The soil activity varies from (148±1) Bq.kg -1 to (652±31) Bq.kg -1 for potassium-40; from (1100±11) Bq.kg -1 to (5700±40) Bq.kg -1 for thorium-232 and from (190±2) Bq.kg -1 to (779±15) Bq.kg -1 for uranium-238. Among 11 samples, the activity value discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for such an activity range and that the convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within an acceptable limit, if the identification of each nuclide in the radioactive series is not required [fr
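The Matlab programs themselves are not reproduced in the abstract; an analogous multiplet fit driven by the Nelder-Mead simplex can be sketched in Python with scipy. The Gaussian doublet model, starting values and tolerances below are our assumptions (no continuum term is included).

```python
import numpy as np
from scipy.optimize import minimize

def doublet(p, x):
    """Two Gaussian peaks with amplitudes a1, a2, centroids c1, c2
    and a shared width s (a simplified multiplet shape model)."""
    a1, c1, a2, c2, s = p
    return (a1 * np.exp(-0.5 * ((x - c1) / s) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s) ** 2))

def fit_doublet(x, spectrum, p0):
    """Least-squares doublet fit using the zero-order Nelder-Mead simplex."""
    cost = lambda p: np.sum((doublet(p, x) - spectrum) ** 2)
    res = minimize(cost, p0, method='Nelder-Mead',
                   options={'maxiter': 20000, 'maxfev': 20000,
                            'xatol': 1e-8, 'fatol': 1e-12})
    return res.x
```

With a centroid separation of about twice the FWHM, matching the regime reported above, the simplex recovers both centroids from a rough starting guess.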

  17. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section
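The discrepancy-principle idea underlying this parameter choice (pick the regularization weight whose data residual matches the noise level) can be illustrated on plain Tikhonov deconvolution, with a geometric bisection standing in for the paper's model-function and Newton refinement. Names and settings below are illustrative.

```python
import numpy as np

def residual_norm(y, h, lam):
    """||H x(lam) - y|| for the Tikhonov solution; increases with lam."""
    n = len(y)
    Y, H = np.fft.rfft(y), np.fft.rfft(h, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + lam)
    return np.linalg.norm(np.fft.irfft(H * X - Y, n))

def morozov_lambda(y, h, delta, lo=1e-10, hi=1e3, n_steps=80):
    """Geometric bisection for the lam whose residual norm equals delta
    (the Morozov discrepancy principle)."""
    for _ in range(n_steps):
        mid = np.sqrt(lo * hi)
        if residual_norm(y, h, mid) < delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

Because the residual norm is monotone in the regularization weight, the bisection is guaranteed to bracket the discrepancy-matching parameter; the Newton refinement in the paper serves to reach it in far fewer solves.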

  18. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase, and classical deconvolution algorithms are unable to deal with such a characteristic. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type filters, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up was also analysed, for which simulated and real data were produced. This set-up demonstrated the interest of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs

  19. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in processing experimental spectra of Doppler broadening of the annihilation line in positron annihilation measurements is described. By comparing results from both algorithms, it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if one uses the 1274 keV 22 Na peak, measured simultaneously with the annihilation peak, for deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes of the annihilation peak (e.g. the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
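The two iterations compared in this study have compact forms: Van Cittert adds back the data residual, while Gold applies a multiplicative ratio that keeps the estimate non-negative. A numpy sketch with circular convolutions (the zero-phase response kernel and the iteration counts are our assumptions):

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via FFT; stands in for the response matrix H."""
    n = len(b)
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b), n)

def van_cittert(y, h, n_iter=200):
    """Van Cittert iteration: x <- x + (y - H x)."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + (y - cconv(h, x))
    return x

def gold(y, h, n_iter=500):
    """Gold ratio iteration: x <- x * (H^T y) / (H^T H x), non-negative."""
    ht = np.roll(h[::-1], 1)  # adjoint kernel of circular convolution
    x = np.full_like(y, y.mean())
    hty = cconv(ht, y)
    for _ in range(n_iter):
        x = x * hty / np.maximum(cconv(ht, cconv(h, x)), 1e-12)
    return x
```

The non-negativity of the Gold estimate is what makes it attractive for counting spectra, where negative channel contents are unphysical.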

  20. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  1. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating the STF, but it requires strict recording conditions: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, defined as the peak divided by the background value, where the background value is the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these, we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
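The "sdc" spikiness measure defined in the abstract (peak divided by the mean absolute background, excluding 10 s around the source time function) is straightforward to compute; the exact placement of the exclusion window below is our assumption.

```python
import numpy as np

def sdc(decon, dt, exclude_s=10.0):
    """Peak of the deconvolution divided by its mean absolute background,
    excluding `exclude_s` seconds centered on the peak."""
    d = np.abs(np.asarray(decon, dtype=float))
    peak = int(d.argmax())
    half = int(round(0.5 * exclude_s / dt))
    mask = np.ones(d.size, dtype=bool)
    mask[max(0, peak - half):peak + half + 1] = False
    return d[peak] / d[mask].mean()
```

Values around 10 or higher flag pulse-like deconvolutions, per the screening threshold quoted above.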

  2. Modified Particle Swarm Optimization for Blind Deconvolution and Identification of Multichannel FIR Filters

    Directory of Open Access Journals (Sweden)

    Khanagha Ali

    2010-01-01

    Full Text Available Blind identification of MIMO FIR systems has received wide attention in various fields of wireless data communications. Here, we use Particle Swarm Optimization (PSO) as the update mechanism of the well-known inverse filtering approach and show its good performance compared to the original method. In particular, the proposed method is shown to be more robust in lower-SNR scenarios or in cases with smaller lengths of available data records. A modified version of PSO is also presented which further improves the robustness and precision of the PSO algorithm. The most important promise of the modified version, however, is its drastically faster convergence compared to the standard implementation of PSO.

  3. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

    In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data which represent a convolution of the former function with an instrument-dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error and down to ∼20 μeV with 10% relative error. (orig.)

  4. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
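
    The core inference can be illustrated with a toy, parsimony-flavoured scorer: every (peak, charge) pair proposes a neutral mass, and the single mass explaining the most peaks wins, so one mass explaining many peaks beats many masses explaining one peak each. This is a simplification, not the paper's algorithm; the proton mass constant, charge range, and tolerance are assumptions:

```python
PROTON = 1.00728  # proton mass in Da (assumption for this toy example)

def mass_hypotheses(mz_peaks, z_range=range(1, 16)):
    """Every (peak, charge) pair proposes a neutral mass M = z * (m/z - m_H+)."""
    return [z * (mz - PROTON) for mz in mz_peaks for z in z_range]

def best_mass(mz_peaks, tol=0.5, z_range=range(1, 16)):
    """Parsimony-flavoured pick: the mass hypothesis that explains the
    largest number of observed peaks (at some charge) wins."""
    def support(m):
        return sum(any(abs(z * (mz - PROTON) - m) < tol for z in z_range)
                   for mz in mz_peaks)
    return max(mass_hypotheses(mz_peaks, z_range), key=support)

# Synthetic charge-state envelope for a 10 kDa species, z = 8..12.
M_true = 10000.0
peaks = [(M_true + z * PROTON) / z for z in range(8, 13)]
m_est = best_mass(peaks)
```

    Note how the half-mass artifact mentioned in the abstract arises naturally: a hypothesis at M/2 is supported by every even-charge peak, but by fewer peaks than the true mass, so the parsimony score rejects it.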

  5. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the “iterative peak fitting deconvolution” method with a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.

  6. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  7. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  8. Reporting methods of blinding in randomized trials assessing nonpharmacological treatments.

    Directory of Open Access Journals (Sweden)

    Isabelle Boutron

    2007-02-01

    Full Text Available BACKGROUND: Blinding is a cornerstone of treatment evaluation. Blinding is more difficult to obtain in trials assessing nonpharmacological treatment and frequently relies on "creative" (nonstandard) methods. The purpose of this study was to systematically describe the strategies used to obtain blinding in a sample of randomized controlled trials of nonpharmacological treatment. METHODS AND FINDINGS: We systematically searched Medline and the Cochrane Methodology Register for randomized controlled trials (RCTs) assessing nonpharmacological treatment with blinding, published during 2004 in high-impact-factor journals. Data were extracted using a standardized extraction form. We identified 145 articles, with the method of blinding described in 123 of the reports. Methods of blinding of participants and/or health care providers and/or other caregivers concerned mainly the use of sham procedures, such as simulation of surgical procedures, similar attention-control interventions, or a placebo with a different mode of administration for rehabilitation or psychotherapy. Trials assessing devices reported various placebo interventions, such as use of a sham prosthesis, identical apparatus (e.g., an identical but inactivated machine, or an activated machine with a barrier to block the treatment), or simulation of using a device. Blinding participants to the study hypothesis was also an important method of blinding. The methods reported for blinding outcome assessors relied mainly on centralized assessment of paraclinical examinations, clinical examinations (i.e., use of video, audiotape, or photography), or adjudication of clinical events. CONCLUSIONS: This study classifies blinding methods, provides a detailed description of methods that could overcome some barriers to blinding in clinical trials assessing nonpharmacological treatment, and provides information for readers assessing the quality of results of such trials.

  9. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method in obtaining fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown.
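
    The inverse-filtering step itself can be sketched as follows, here with a Wiener-style regularised inverse standing in for the paper's graphically optimised noise filter (the kinetic signal, instrument response, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 512
t = np.arange(n)

# Undistorted "kinetic" signal: a step followed by an exponential approach.
signal = (1.0 - np.exp(-t / 40.0)) * (t > 50)

# Gaussian instrument response function (IRF), centred and area-normalised.
irf = np.exp(-0.5 * ((t - n // 2) / 6.0) ** 2)
irf /= irf.sum()
F_irf = np.fft.fft(np.fft.ifftshift(irf))        # zero-phase IRF spectrum

# Measured curve: circular convolution of signal and IRF, plus noise.
measured = np.real(np.fft.ifft(np.fft.fft(signal) * F_irf))
measured += 0.001 * rng.standard_normal(n)

# Regularised inverse filter: behaves like 1/F_irf where the IRF spectrum
# is strong, and rolls off where it vanishes, so noise is not amplified
# without bound.
eps = 1e-3
inv_filt = np.conj(F_irf) / (np.abs(F_irf) ** 2 + eps)
deconv = np.real(np.fft.ifft(np.fft.fft(measured) * inv_filt))
```

    The parameter eps plays the role of the noise filter cutoff that the paper optimises graphically: too small and noise dominates, too large and the restored curve stays over-smoothed.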

  10. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Full Text Available Stain colour estimation is a prominent step in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.
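
    For context, once a stain mixing matrix is available (estimated as in this paper, or fixed), the unmixing itself is a per-pixel least-squares solve in optical density (Beer–Lambert) space. A minimal sketch using the commonly quoted Ruifrok–Johnston H&E vectors, taken here as an assumption rather than the paper's estimated matrix:

```python
import numpy as np

# Reference stain vectors (H&E); rows = stains, columns = RGB optical density.
# These are the widely used Ruifrok–Johnston values, assumed for illustration.
stains = np.array([[0.650, 0.704, 0.286],    # haematoxylin
                   [0.072, 0.990, 0.105]])   # eosin
stains = stains / np.linalg.norm(stains, axis=1, keepdims=True)

def colour_deconvolve(rgb, stain_matrix):
    """Beer-Lambert: optical density is linear in stain concentrations,
    so unmixing is a least-squares solve per pixel."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))            # rgb values in [0, 1]
    conc, *_ = np.linalg.lstsq(stain_matrix.T, od.reshape(-1, 3).T, rcond=None)
    return conc.T.reshape(rgb.shape[:-1] + (stain_matrix.shape[0],))

# Synthetic 2-pixel image: one H-dominated and one E-dominated pixel.
c_true = np.array([[1.0, 0.1], [0.05, 0.8]])
rgb = np.exp(-(c_true @ stains))
conc = colour_deconvolve(rgb, stains)
```

    The contribution of the paper is precisely the estimation of the stain matrix from the image itself; the solve above is the downstream step that any such estimate feeds into.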

  11. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  12. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  13. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  14. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
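
    The lineshape-correction idea behind water reference deconvolution can be illustrated in the time domain: whatever multiplicative distortion the instrument and sample impose is shared by the water reference, so dividing it out and substituting a clean target lineshape restores the metabolite signal. A synthetic sketch (all signals, rates, and the distortion model are illustrative assumptions, not the Morris implementation):

```python
import numpy as np

n = 1024
t = np.arange(n) / 1000.0   # seconds; 1 kHz sampling, illustrative

# Ideal metabolite FID: one line at 50 Hz with natural (Lorentzian) decay.
fid_ideal = np.exp(2j * np.pi * 50 * t) * np.exp(-t / 0.2)

# Instrumental distortion (e.g. field inhomogeneity): extra Gaussian decay
# plus a drifting phase, identical for the water and metabolite signals.
distortion = np.exp(-(t / 0.15) ** 2) * np.exp(2j * np.pi * 3 * t ** 2)

fid = fid_ideal * distortion                 # measured metabolite FID
water_ref = np.exp(-t / 0.3) * distortion    # measured water reference at 0 Hz

# Water reference deconvolution: divide out the measured water lineshape
# and substitute the clean target lineshape in its place.
target = np.exp(-t / 0.3)                    # what the water FID should be
corrected = fid * target / water_ref
```

    Because the distortion cancels exactly in this noiseless sketch, the corrected FID equals the ideal one; with real data the division also rescales noise, which is one reason the abstract reports the method being susceptible to errors in lineshape correction.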

  15. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
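
    The model-selection step can be sketched generically: fit sums of 1, 2, 3, ... Lorentzians and keep the peak count with the lowest BIC. This is an illustration of the principle, not the decon1d implementation; the Gaussian-noise BIC formula N ln(RSS/N) + k ln N and the starting values are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

x = np.linspace(-10, 10, 400)

def lorentz(x, amp, ctr, wid):
    return amp * wid ** 2 / ((x - ctr) ** 2 + wid ** 2)

# Synthetic spectrum: two overlapping Lorentzian peaks plus noise.
y = lorentz(x, 1.0, -1.5, 1.0) + lorentz(x, 0.6, 2.0, 1.5)
y = y + 0.01 * rng.standard_normal(x.size)

def fit_n_peaks(n):
    def resid(p):
        return sum(lorentz(x, *p[3 * i:3 * i + 3]) for i in range(n)) - y
    p0 = np.concatenate([[0.5, c, 1.0] for c in np.linspace(-3, 3, n)])
    return least_squares(resid, p0)

def bic(n):
    """Gaussian-noise BIC: N ln(RSS/N) + k ln N; extra peaks are penalised."""
    rss = np.sum(fit_n_peaks(n).fun ** 2)
    return x.size * np.log(rss / x.size) + 3 * n * np.log(x.size)

best_n = min([1, 2, 3], key=bic)
```

    The third peak reduces the residual only marginally, so its penalty term 3 ln N outweighs the fit improvement and the two-peak model is selected, which is the over-fitting protection the abstract describes.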

  16. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution for different volume ionization chambers. Methods: A 0.125cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the “deconvolution” functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes ranging from 2 × 2 cm to 20 × 20 cm, along both the lateral and longitudinal directions. For field sizes from 20 × 20 cm to 30 × 30 cm, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvoluted profiles for all the ionization chambers involved in the study. The differences in penumbral values between the deconvoluted profiles along the lateral and longitudinal directions were on the order of 0.1 to 0.3 mm for all the chambers. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvoluted profiles. Conclusion: The results of the deconvoluted profiles for the 0.125cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.

  17. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.
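
    Two of the ingredients named above, Butterworth filtering of the fluence and zeroing the values outside the field, can be sketched on a 1D profile. This shows only those two steps, not the series-expansion deconvolution; the filter order, cutoff, and field geometry are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# 1D fluence profile with sharp field edges at x = +/-5.
x = np.linspace(-10, 10, 401)
fluence = ((x > -5) & (x < 5)).astype(float)

# Low-pass Butterworth filter suppressing high-frequency deviations at the
# field edges; filtfilt applies it forward and backward (zero phase lag).
b, a = butter(4, 0.1)
robust = filtfilt(b, a, fluence)

# Additional approximation from the abstract: set the fluence outside the
# field to zero in the robust profile.
robust[np.abs(x) >= 5] = 0.0
```

    The filtered profile keeps the in-field plateau while rounding the edges, which is the intended robustness against random geometrical shifts.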

  18. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  19. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

    We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for deconvolution density testing with a known noise density, and efficient score tests for the case of an unknown noise density.

  20. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  1. Use of new spectral analysis methods in gamma spectra deconvolution

    International Nuclear Information System (INIS)

    Pinault, J.L.

    1991-01-01

    A general deconvolution method applicable to X and gamma ray spectrometry is proposed. Using new spectral analysis methods, it is applied to an actual case: the accurate on-line analysis of three elements (Ca, Si, Fe) in a cement plant using neutron capture gamma rays. Neutrons are provided by a low-activity (5 μg) 252Cf source; the detector is a 3 in. × 8 in. BGO scintillator. The principle of the method rests on the Fourier transform of the spectrum. The search for peaks and determination of peak areas are worked out in the Fourier representation, which enables separation of background and peaks and very efficiently discriminates peaks, or elements represented by several peaks. First, the spectrum is transformed so that in the new representation the full width at half maximum (FWHM) is independent of energy. The spectrum is then arranged symmetrically and transformed into the Fourier representation. The latter is multiplied by a function in order to transform the original Gaussian peaks into Lorentzian peaks. An autoregressive filter is calculated, leading to a characteristic polynomial whose complex roots represent both the location and the width of each peak, provided that their absolute value is lower than unity. The amplitude of each component (the area of each peak, or the sum of the areas of the peaks characterizing an element) is fitted by the weighted least squares method, taking into account that errors in spectra are independent and follow a Poisson law. Very accurate results are obtained, which would be hard to achieve by other methods. The DECO FORTRAN code has been developed for compatible PC microcomputers. Some features of the code are given. (orig.)
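
    The final amplitude-fitting step can be sketched as weighted linear least squares with Poisson weights (variance approximately equal to the counts). The peak shapes, positions, and flat background component below are illustrative assumptions, and the Fourier-domain peak search of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

ch = np.arange(300)   # spectrum channels

def gauss(c, ctr, fwhm):
    s = fwhm / 2.355
    return np.exp(-0.5 * ((c - ctr) / s) ** 2)

# Known component shapes: two full-energy peaks plus a flat background.
shapes = np.stack([gauss(ch, 100, 12), gauss(ch, 160, 12), np.ones(ch.size)])
true_amps = np.array([500.0, 200.0, 30.0])
counts = rng.poisson(true_amps @ shapes).astype(float)

# Weighted least squares with Poisson weights: Var(counts) ~ counts,
# so each channel is weighted by 1/counts (floored at 1 to avoid division
# by zero in empty channels).
w = 1.0 / np.maximum(counts, 1.0)
A = shapes.T                                  # design matrix, columns = components
amps = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * counts))
```

    With the weights in place, high-count channels no longer dominate the fit, which is the point of accounting for the Poisson error law mentioned in the abstract.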

  2. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into the stability, accuracy, and time complexity of a numerical bijection between the time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving ... discrimination achieves mixed phase deconvolution and is equivalent to complex cepstrum based deconvolution by causality, which has lower time and space complexities, as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing

  3. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology for nuclear logging. The principles of the familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are presented. On the basis of the quantitative interpretation technology for intensity gamma logging using the deconvolution method, a new quantitative interpretation method for separating radioactive elements is presented for interpreting gamma spectrum logging. This is a point-by-point spectrum striping deconvolution technique which can give the logging data a quantitative interpretation. (authors)

  4. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

    The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software, which utilizes a standard peak search algorithm. (author)

  5. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrievals, especially of feldspar proportions (e.g., K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.

  6. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-02-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrievals, especially of feldspar proportions (e.g., K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.
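
    Linear deconvolution of emissivity spectra of this kind can be sketched as non-negative least squares against a library of endmember spectra. The endmembers, band centres, and noise level below are toy assumptions, not ASTER library spectra:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

wav = np.linspace(8.0, 12.0, 60)   # thermal-IR wavelengths in micrometres

def endmember(ctr, wid):
    """Toy emissivity spectrum: a Gaussian absorption dip on a flat continuum."""
    return 1.0 - 0.3 * np.exp(-0.5 * ((wav - ctr) / wid) ** 2)

E = np.column_stack([endmember(8.6, 0.3),    # "quartz-like" (assumed)
                     endmember(9.6, 0.4),    # "feldspar-like" (assumed)
                     endmember(11.2, 0.5)])  # "carbonate-like" (assumed)

true_frac = np.array([0.5, 0.3, 0.2])
mixed = E @ true_frac + 0.002 * rng.standard_normal(wav.size)

# Linear deconvolution: non-negative least squares, abundances renormalised
# so the retrieved proportions sum to one.
frac, _ = nnls(E, mixed)
frac = frac / frac.sum()
```

    Dropping or merging spectral bands in this setup degrades the separation between endmembers with nearby absorption features, which mirrors the abstract's point about the number and position of bands limiting feldspar discrimination.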

  7. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals.

  8. The research progress of perforating gun inner wall blind hole machining method

    Science.gov (United States)

    Wang, Zhe; Shen, Hongbing

    2018-04-01

    Blind-hole machining has long been a technical challenge in the oil, electronics, aviation and other industries. This paper introduces different methods for blind-hole machining, focusing on methods for machining blind holes in the inner wall of perforating guns. The advantages and disadvantages of the different methods are discussed, and the development trends of blind-hole machining are outlined.

  9. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low match factor (MF) values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the ability of this approach to serve as an improved dereplication method for complex biological samples such as plant extracts.

  10. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously, and sequential 20 s frames were acquired; the study of each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by applying the two methods. Bland-Altman statistical analysis demonstrated that 31 of the 32 cases (97%) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P method is likely to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot; it can be considered an alternative technique for calculating the renal uptake rate.
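
    The Rutland-Patlak construction itself is just a straight-line fit: the kidney curve divided by the blood curve is plotted against the cumulative blood integral divided by the blood curve, and the slope of the early linear segment is the uptake rate constant. A sketch with a synthetic blood curve (not patient data):

```python
import numpy as np

t = np.arange(0, 120, 20.0)                 # 20 s frames over the first 2 min
B = 1000.0 * np.exp(-t / 300.0)             # synthetic blood (heart ROI) curve
k_true = 0.05                               # true uptake rate constant (1/s)
cumB = np.concatenate(([0.0], np.cumsum(0.5 * (B[1:] + B[:-1]) * np.diff(t))))
R = k_true * cumB + 0.1 * B                 # kidney = uptake + vascular fraction

x = cumB / B                                # Patlak abscissa
y = R / B                                   # Patlak ordinate
slope, intercept = np.polyfit(x, y, 1)      # slope estimates the uptake rate
```

    The intercept recovers the vascular fraction (0.1 here), which is why the plot separates uptake from blood-pool activity.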

  11. A simple method for the deconvolution of 134Cs/137Cs peaks in gamma-ray scintillation spectrometry

    International Nuclear Information System (INIS)

    Darko, E.O.; Osae, E.K.; Schandorf, C.

    1998-01-01

    A simple method for the deconvolution of 134Cs/137Cs peaks in a given mixture of 134Cs and 137Cs using NaI(Tl) gamma-ray scintillation spectrometry is described. In this method the 795 keV peak of 134Cs is used as a reference to calculate the activity of 137Cs directly from the measured peaks. Certified reference materials were measured using the method and compared with high-resolution gamma-ray spectrometry measurements. The results showed good agreement with the certified values. The method is very simple and does not need any complicated mathematics or computer programme to deconvolute the overlapping 604.7 keV and 661.6 keV peaks of 134Cs and 137Cs, respectively. (author). 14 refs.; 1 tab., 2 figs
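
    The reference-peak idea reduces to simple arithmetic: the 795 keV peak belongs to 134Cs alone, so its area fixes the 134Cs contribution under the overlapping 605/662 keV region, and the remainder is attributed to 137Cs. The numbers below are hypothetical placeholders, not the paper's calibration constants:

```python
counts_795 = 4000.0          # net counts in the clean 134Cs reference peak
counts_overlap = 15000.0     # net counts in the overlapping 605 + 662 keV region
ratio_605_to_795 = 1.2       # hypothetical 134Cs peak-response ratio (605/795)

cs134_in_overlap = ratio_605_to_795 * counts_795   # 134Cs share of the overlap
cs137_counts = counts_overlap - cs134_in_overlap   # counts assigned to 137Cs
```

    In practice the response ratio would come from a pure 134Cs calibration source measured on the same detector.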

  12. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and a seismic wavelet. One of the most important stages in seismic data processing is deconvolution; conventional deconvolution uses inverse filters based on Wiener filter theory. This theory rests on modelling assumptions that are not always valid. The discrete form of the Kalman filter is therefore used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its ability to handle continually time-varying models and its high resolution. In this work, we use a discrete Kalman filter combined with primitive deconvolution. Since the filtering works on the reflectivity function, the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with an energy waveform referred to as the seismic wavelet. A higher-frequency wavelet gives a smaller wavelength; graphs of these results are presented.
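
    The Wiener-theory baseline that the Kalman approach is compared against can be sketched as a regularised inverse filter in the frequency domain. The reflectivity and wavelet below are synthetic, with a simple decaying wavelet standing in for a real seismic wavelet:

```python
import numpy as np

n = 256
refl = np.zeros(n)
refl[[40, 90, 180]] = [1.0, -0.7, 0.5]          # sparse reflectivity series
wavelet = np.exp(-0.3 * np.arange(32))          # minimum-phase-like wavelet
trace = np.convolve(refl, wavelet)[:n]          # trace = reflectivity * wavelet

W = np.fft.rfft(wavelet, n)
T = np.fft.rfft(trace, n)
eps = 1e-4 * np.max(np.abs(W)) ** 2             # white-noise regularisation term
est = np.fft.irfft(np.conj(W) * T / (np.abs(W) ** 2 + eps), n)
```

    With noisy data, `eps` trades resolution against noise amplification, which is exactly the limitation that motivates the time-varying Kalman formulation.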

  13. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
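
    A toy version of the comparison, with a synthetic chirp and two overlapping arrivals (an assumed direct and reflected path), using non-negative least squares as a stand-in for the active-set solver:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.signal import chirp

fs = 1000.0
t = np.arange(0, 0.2, 1 / fs)
s = chirp(t, f0=20.0, f1=200.0, t1=0.2)      # coded excitation (200 samples)
n, m = 400, len(s)
out = np.zeros(n)
for delay, amp in [(60, 1.0), (90, 0.5)]:    # overlapping direct/reflected paths
    out[delay:delay + m] += amp * s

# matched filter: cross-correlate the output with the excitation
mf = np.correlate(out, s, mode="valid")

# deconvolution: solve out = C @ x with x >= 0, C the convolution matrix
C = np.zeros((n, n - m + 1))
for k in range(n - m + 1):
    C[k:k + m, k] = s
x, _ = nnls(C, out)                          # transit-time spectrum estimate
```

    The matched filter peaks at the stronger arrival, while the non-negative deconvolution returns near-delta amplitudes at both transit times, mirroring the side-lobe advantage reported in the note.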

  14. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  15. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (about 500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (spanning many bunch-crossing periods) is sampled and digitised in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies performed are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, ADC and further digital processing implemented on a PC. (author)

  16. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the curve fitted to the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by its application to the deconvolution and the calculation of the alpha-particle emission probabilities of 239Pu, 241Am and 235U. (author)
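
    The fitted line shape, a Gaussian convolved with an exponential low-energy tail, has a closed form (the exponentially modified Gaussian). The sketch below uses a single tail rather than ALFITeX's two, with illustrative parameter values (the 5156.6 keV centroid corresponds to the main 239Pu alpha line):

```python
import numpy as np
from scipy.special import erfc

def alpha_peak(E, area, mu, sigma, tau):
    """Gaussian (mu, sigma) convolved with a left-handed exponential tail tau."""
    z = (E - mu) / sigma + sigma / tau
    return (area / (2.0 * tau)) * np.exp((E - mu) / tau
                                         + sigma**2 / (2.0 * tau**2)) \
        * erfc(z / np.sqrt(2.0))

E = np.linspace(5000.0, 5300.0, 30001)          # energy grid in keV
y = alpha_peak(E, area=1.0, mu=5156.6, sigma=10.0, tau=15.0)
```

    The analytic form integrates to `area` and has mean `mu - tau`, so the tail parameter directly measures the centroid shift caused by energy straggling.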

  17. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  18. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  19. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimization problem is solved by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.
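
    A toy sketch of the alternating idea on a factorization X ≈ D C: a sparse-coding update of the coefficients (one soft-thresholded gradient step) alternates with a least-squares dictionary update. Dimensions, penalty, and iteration counts are arbitrary choices for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 20, 5, 100
D_true = rng.standard_normal((n, k))
C_true = rng.standard_normal((k, m)) * (rng.random((k, m)) < 0.3)
X = D_true @ C_true                              # observed signals

D = rng.standard_normal((n, k))                  # random initial dictionary
C = np.zeros((k, m))
lam = 0.01
for _ in range(200):
    # sparse-coding step: one ISTA update on the coefficients
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    C = C - step * (D.T @ (D @ C - X))
    C = np.sign(C) * np.maximum(np.abs(C) - step * lam, 0.0)
    # dictionary step: least squares, rescaled to unit-norm atoms
    D = X @ np.linalg.pinv(C)
    norms = np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    D, C = D / norms, C * norms.T
err = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
```

    The unit-norm rescaling (with the compensating row scaling of C) is the usual device for removing the scale ambiguity between dictionary and coefficients.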

  20. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.

  1. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.

  2. Study of the lifetime of the TL peaks of quartz: comparison of the deconvolution using the first order kinetic with the initial rise method

    International Nuclear Information System (INIS)

    RATOVONJANAHARY, A.J.F.

    2005-01-01

    Quartz is a thermoluminescent material which can be used for dating and/or for dosimetry. This material has been used since the 1960s for dating samples like pottery, flint, etc., but the method is still subject to improvement. One of the problems of thermoluminescence dating is the estimation of the lifetime of the 'used peak'. The glow-curve deconvolution (GCD) technique for the analysis of a composite thermoluminescence glow curve into its individual glow peaks has been applied widely since the 1980s, and many functions describing a single glow peak have been proposed. For analysing quartz behaviour, thermoluminescence glow-curve deconvolution functions are compared for first-order kinetics. The free parameters of the GCD functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be obtained experimentally; the activation energy (E) is the additional free parameter. The lifetime (τ) of each glow peak, which is an important factor for dating, is calculated from these three parameters. For used-peak lifetime analysis, GCD results are compared to those from the initial rise method (IRM). Results vary appreciably from method to method.
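
    For a first-order peak the lifetime follows from E and Tm alone: the peak-maximum condition beta*E/(k*Tm^2) = s*exp(-E/(k*Tm)) fixes the frequency factor s, and the lifetime at a storage temperature T is tau = exp(E/(k*T))/s. The parameter values below are illustrative, not results from the thesis:

```python
import math

k = 8.617e-5          # Boltzmann constant, eV/K
beta = 1.0            # heating rate, K/s
E, Tm = 1.45, 598.0   # illustrative activation energy (eV), peak temperature (K)

# frequency factor from the first-order peak-maximum condition
s = beta * E / (k * Tm**2) * math.exp(E / (k * Tm))      # 1/s

T_store = 293.0                                          # storage temperature, K
tau_s = math.exp(E / (k * T_store)) / s                  # lifetime in seconds
tau_years = tau_s / 3.156e7
```

    Because tau depends exponentially on E, the small differences between GCD- and IRM-derived activation energies translate into order-of-magnitude differences in lifetime, which is why the two methods can disagree so strongly.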

  3. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven to be an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a prior period supplied by the user. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the filtered signal with the maximum kurtosis among all candidates within the assigned iteration count as the final choice. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the shift order, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency-spectrum and envelope-spectrum analysis because the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
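
    The period-estimation step can be sketched directly: take the envelope of the signal (Hilbert transform magnitude), autocorrelate it, and pick the dominant autocorrelation peak as the fault period. The impulse train below is synthetic, not a bearing measurement:

```python
import numpy as np
from scipy.signal import hilbert

period = 125                                   # fault period in samples
n = 4000
k = np.arange(100)
burst = np.exp(-0.05 * k) * np.sin(2 * np.pi * 0.3 * k)   # decaying impulse
x = np.zeros(n)
for start in range(0, n - 100, period):        # periodic fault impulses
    x[start:start + 100] += burst
x += 0.05 * np.random.default_rng(3).standard_normal(n)

env = np.abs(hilbert(x))                       # envelope signal
env -= env.mean()
ac = np.correlate(env, env, mode="full")[n - 1:]   # one-sided autocorrelation
est_period = 50 + int(np.argmax(ac[50:200]))       # skip the zero-lag lobe
```

    The lower bound of the search window (50 samples here) only needs to exclude the zero-lag main lobe; beyond that the envelope autocorrelation peaks at the true period even under noise.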

  4. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of the impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
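
    The l1-regularised deconvolution model can be sketched with a much simpler solver than the paper's PDIPM: plain iterative soft thresholding (ISTA) on a synthetic impulse response and a sparse force history. This illustrates the model, not the paper's algorithm:

```python
import numpy as np

n = 200
k = np.arange(40)
h = np.exp(-0.15 * k) * np.cos(0.5 * k)        # crude structural impulse response
f_true = np.zeros(n)
f_true[[30, 110]] = [2.0, 1.0]                 # sparse impact force history
y = np.convolve(f_true, h)[:n]                 # measured structural response

H = np.zeros((n, n))                           # convolution (Toeplitz) matrix
for j in range(n):
    H[j:j + 40, j] = h[:min(40, n - j)]

lam = 0.05
L = np.linalg.norm(H, 2) ** 2                  # Lipschitz constant of gradient
f = np.zeros(n)
for _ in range(1000):                          # ISTA: gradient + soft threshold
    f = f - (1.0 / L) * (H.T @ (H @ f - y))
    f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)
```

    The l1 penalty drives everything except the true impact instants to zero, which is the sparsity behaviour the paper exploits; interior-point and conjugate-gradient machinery is what makes the same model tractable at large scale.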

  5. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, Miori, E-mail: miori@mx6.et.tiki.ne.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Tsuji, Yoshihisa, E-mail: y.tsuji@extra.ocn.ne.jp [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Iwasaki, Toshiroh [Department of Veterinary Internal Medicine, Tokyo University of Agriculture and Technology, Saiwai-cho, 3-5-8, Fuchu 183-8509 (Japan); Miyake, Yoh-Ichi [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Yazumi, Shujiro [Digestive Disease Center, Kitano Hospital, 2-4-20 Ougi-machi, Kita-ku, Osaka 530-8480 (Japan); Chiba, Tsutomu [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Yamada, Kazutaka, E-mail: kyamada@obihiro.ac.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan)

    2011-01-15

    Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the parameters measured by these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI/kg) at 5.0 ml/s. The abdominal aorta and the splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The arterial AUC significantly decreased as the artery neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.

  6. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    International Nuclear Information System (INIS)

    Kishimoto, Miori; Tsuji, Yoshihisa; Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja; Iwasaki, Toshiroh; Miyake, Yoh-Ichi; Yazumi, Shujiro; Chiba, Tsutomu; Yamada, Kazutaka

    2011-01-01

    Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the parameters measured by these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI/kg) at 5.0 ml/s. The abdominal aorta and the splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The arterial AUC significantly decreased as the artery neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.

  7. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution to the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.

  8. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed-interval Kalman filter based deconvolution routine written in C is employed. It uses a state...... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding...... to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core on an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds, making it possible to perform real-time deconvolution on ultrasound data...

  9. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 512 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)
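
    The scheme reduces to division in Fourier space followed by a filter; the sketch below uses a crude magnitude cutoff standing in for the paper's automatic background-based filter, and synthetic Gaussians for the lineshape and resolution function:

```python
import numpy as np

n = 512
x = np.arange(n) - n / 2
true_line = np.exp(-(x / 8.0) ** 2)                  # underlying lineshape
res = np.exp(-(x / 4.0) ** 2)
res /= res.sum()                                     # resolution function, area 1
measured = np.real(np.fft.ifft(np.fft.fft(true_line) * np.fft.fft(res)))

F_meas, F_res = np.fft.fft(measured), np.fft.fft(res)
mask = np.abs(F_res) > 1e-3                          # crude noise-floor cutoff
ratio = np.zeros(n, dtype=complex)
ratio[mask] = F_meas[mask] / F_res[mask]             # Fourier-space division
decon = np.real(np.fft.ifft(ratio))
```

    With real counting noise the cutoff must sit where the signal spectrum meets the flat noise floor; setting it too low reintroduces the unphysical oscillations the paper warns about.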

  10. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  11. Methods of blinding in reports of randomized controlled trials assessing pharmacologic treatments: a systematic review.

    Directory of Open Access Journals (Sweden)

    Isabelle Boutron

    2006-10-01

Full Text Available BACKGROUND: Blinding is a cornerstone of therapeutic evaluation because lack of blinding can bias treatment effect estimates. An inventory of the blinding methods would help trialists conduct high-quality clinical trials and readers appraise the quality of results of published trials. We aimed to systematically classify and describe methods to establish and maintain blinding of patients and health care providers and methods to obtain blinding of outcome assessors in randomized controlled trials of pharmacologic treatments. METHODS AND FINDINGS: We undertook a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments with blinding published in 2004 in high impact-factor journals from Medline and the Cochrane Methodology Register. We used a standardized data collection form to extract data. The blinding methods were classified according to whether they primarily (1) established blinding of patients or health care providers, (2) maintained the blinding of patients or health care providers, and (3) obtained blinding of assessors of the main outcomes. We identified 819 articles, with 472 (58%) describing the method of blinding. Methods to establish blinding of patients and/or health care providers concerned mainly treatments provided in identical form, specific methods to mask some characteristics of the treatments (e.g., added flavor or opaque coverage), or use of double dummy procedures or simulation of an injection. Methods to avoid unblinding of patients and/or health care providers involved use of active placebo, centralized assessment of side effects, patients informed only in part about the potential side effects of each treatment, centralized adapted dosage, or provision of sham results of complementary investigations. The methods reported for blinding outcome assessors mainly relied on a centralized assessment of complementary investigations, clinical examination (i.e., use of video, audiotape, or

  12. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.) belonging to the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor, s, independent of temperature and for s a function of temperature.
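For context, the initial rise method rests on the fact that on the low-temperature flank of a glow peak the TL intensity follows I(T) ∝ exp(−E/kT), so a straight-line fit of ln I against 1/T yields the trap activation energy E. A minimal sketch under that single-discrete-trap assumption (synthetic data; names are illustrative):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def initial_rise_energy(T, I):
    """Trap activation energy E (eV) from the initial-rise region:
    ln I = const - E / (k_B * T), so E = -slope * k_B."""
    slope, _intercept = np.polyfit(1.0 / T, np.log(I), 1)
    return -slope * K_B

# Synthetic initial-rise data for a trap with E = 1.0 eV
# (low-temperature tail only, where the IR approximation holds).
E_true = 1.0
T = np.linspace(300.0, 330.0, 30)
I = 1e12 * np.exp(-E_true / (K_B * T))
print(round(initial_rise_energy(T, I), 3))  # → 1.0
```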

  13. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired reliable sensor response of an SRM at the Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  14. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large datasets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) file format for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.

  15. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
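The Richardson-Lucy iteration the authors favour can be sketched in a few lines; this 1-D version omits their total-variation regularization and uses an illustrative two-reflector example:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """Richardson-Lucy deconvolution: multiplicative updates that
    preserve positivity of the estimate."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        conv = np.where(conv < 1e-12, 1e-12, conv)  # guard division
        estimate = estimate * np.convolve(blurred / conv, psf_mirror,
                                          mode="same")
    return estimate

# Demo: two point reflectors blurred by a Gaussian PSF.
psf = np.exp(-np.arange(-7, 8) ** 2 / (2 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.6
blurred = np.convolve(truth, psf, mode="same")
estimate = richardson_lucy(blurred, psf)
```

After a few dozen iterations the point reflectors are substantially sharpened relative to the blurred input, and the estimate stays non-negative throughout.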

  16. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  17. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using the Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

Full Text Available Interpretation of high resolution aeromagnetic data of Ilesha and its environs within the basement complex of the geological setting of Southwestern Nigeria was carried out in the study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. This investigation was carried out using Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data acquired in 2009 were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth-weighting analyses across the study area using an Euler deconvolution control file in the Oasis montaj software. Total magnetic intensity in the field ranged from –77.7 to 139.7 nT, revealing both high-amplitude and low-amplitude magnetic anomalies in the area under consideration. The high intensities correlate with lithological variation in the basement, and the sharp contrast reflects the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak, sharp low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with depths ranging from −500 to 1000 m, and shows that the basement relief is generally gentle and flat within the basement terrain.
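Euler deconvolution itself reduces, per window, to a small least-squares problem on the homogeneity equation (x − x0)∂T/∂x + (z − z0)∂T/∂z = −N(T − B). A minimal profile-based sketch with analytic derivatives and a synthetic 1/r² source (illustrative only; not the Oasis montaj workflow used in the study):

```python
import numpy as np

def euler_1d(x, f, fx, fz, N):
    """Least-squares Euler deconvolution along a profile observed at
    z = 0: solve x0*fx + z0*fz + N*B = x*fx + N*f for the source
    location x0, depth z0 and regional background B."""
    A = np.column_stack([fx, fz, np.full_like(x, float(N))])
    b = x * fx + N * f
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # x0, z0, B

# Synthetic anomaly f = 1/r^2 from a source at x0 = 10, depth z0 = 5
# (homogeneous of degree -2, i.e. structural index N = 2), with
# analytic horizontal and vertical derivatives at z = 0.
x0_true, z0_true, N = 10.0, 5.0, 2
x = np.linspace(-50, 50, 201)
r2 = (x - x0_true) ** 2 + z0_true ** 2
f = 1.0 / r2
fx = -2.0 * (x - x0_true) / r2 ** 2
fz = 2.0 * z0_true / r2 ** 2
x0, z0, B = euler_1d(x, f, fx, fz, N)
```

In field practice the equation is solved in sliding windows over gridded data, with derivatives computed numerically, which is where the scatter of depth solutions comes from.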

  18. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  19. Thermoluminescence of nanocrystalline CaSO{sub 4}: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    Energy Technology Data Exchange (ETDEWEB)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com [Department of Physics, University of Pune, Ganeshkhind, Pune -411007, India and Department of Physics, Fergusson College, Pune- 411004 (India); Patil, B. J.; Bhoraskar, V. N.; Dhole, S. D. [Department of Physics, University of Pune, Ganeshkhind, Pune -411007 (India); Sahare, P. D. [Department of Physics and Astrophysics, University of Delhi, Delhi- 110007 (India)

    2014-04-24

Nanorods of CaSO{sub 4}: Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows linear behavior up to 5 kGy and saturates at higher doses. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves, and trapping parameters for the various peaks have been calculated with it.

  20. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  1. A deconvolution method for deriving the transit time spectrum for ultrasound propagation through cancellous bone replica models.

    Science.gov (United States)

    Langton, Christian M; Wille, Marie-Luise; Flegg, Mark B

    2014-04-01

    The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportion of bone and marrow propagated. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectrum analyses derived from Bland-Altman analysis ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available utilising ultrasound.
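The deconvolution step can be posed as non-negative least squares: the received signal is the input pulse convolved with the transit-time spectrum, whose bins cannot be negative. A toy sketch using SciPy's active-set NNLS solver (the two-delay example and all names are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def transit_time_spectrum(input_sig, output_sig, n_bins):
    """Solve output = input (*) h for a non-negative spectrum h,
    one bin per transit-time sample, via active-set NNLS."""
    n = len(output_sig)
    A = np.zeros((n, n_bins))
    for k in range(n_bins):       # column k = input delayed k samples
        A[k:, k] = input_sig[: n - k]
    h, _residual = nnls(A, output_sig)
    return h

# Toy example: two ray families with transit delays of 3 and 7 samples.
pulse = np.exp(-((np.arange(40) - 8.0) ** 2) / 8.0)
h_true = np.zeros(12)
h_true[3], h_true[7] = 1.0, 0.5
output = np.convolve(pulse, h_true)[:40]
h_est = transit_time_spectrum(pulse, output, 12)
```

In the noiseless case the spectrum is recovered exactly; with real signals the non-negativity constraint is what keeps the deconvolution stable.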

  2. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    Science.gov (United States)

    Breitmeyer, Bruno G

    2015-09-01

Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods, together with additional reasonable theoretical considerations, suggest the following tentative hierarchy. At the highest level in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression fall at progressively lower levels, while unconscious processing at the lowest level is indexed by eye-based binocular-rivalry suppression. Although the unconscious processing levels indexed by additional blinding methods are yet to be determined, tentative placements are also given: lower in the hierarchy for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and higher for that indexed by attentional blinding effects beyond the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision.

  3. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
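Both applications boil down to dividing by the system's frequency response in Fourier space, with some regularization where the response is near zero. A hedged sketch (the first-order high-pass model and the `eps` knob are illustrative assumptions, not the authors' hardware):

```python
import numpy as np

def deconvolve_response(recorded, freq_response, eps=1e-3):
    """Undo a known system frequency response in Fourier space.
    Regularized division R * conj(H) / (|H|^2 + eps) avoids
    amplifying bands where the response is ~0."""
    R = np.fft.rfft(recorded)
    H = np.asarray(freq_response, dtype=complex)
    X = R * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n=len(recorded))

# Demo: a sinusoid attenuated by a first-order high-pass filter
# (corner at 20 cycles per record, signal at 5 cycles).
n = 1024
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 5 * t)
freqs = np.fft.rfftfreq(n, d=1.0 / n)       # 0, 1, ..., n/2 cycles
H = 1j * freqs / (1j * freqs + 20.0)        # high-pass response
recorded = np.fft.irfft(np.fft.rfft(signal) * H, n=n)
restored = deconvolve_response(recorded, H)
```

The recorded trace is attenuated to roughly a quarter of the original amplitude; division by the known response restores it to within the small bias introduced by `eps`.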

  4. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where plural trapping levels have a probability of retrapping, the electrons trapped at one level can transfer from this level to another through retrapping via the conduction band during TL readout. However, at present, the method of CGCD does not account for electron transitions between the trapping levels; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  5. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.

  6. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

Photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave. PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, comprising deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With our proposed method, the reconstructed PA images yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood-vessel model. In the future, our study might hold potential for clinical PA imaging, as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.

  7. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.
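To give a flavor of the factorization involved, the sketch below implements plain NMF with multiplicative updates; the paper's NMF2D additionally shifts the factors in time and log-frequency and imposes sparsity, both omitted here (toy matrix and names are illustrative):

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain non-negative matrix factorization V ~ W @ H by
    multiplicative updates (a simplified stand-in for NMF2D)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
V = (np.outer([1.0, 0.0, 2.0, 0.0], [1, 1, 0, 0, 1])
     + np.outer([0.0, 3.0, 0.0, 1.0], [0, 0, 1, 1, 0]))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the transcription setting the columns of `W` play the role of instrument spectra and the rows of `H` the note activations; NMF2D extends both with 2-D shifts so a single template covers all pitches.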

  8. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Science.gov (United States)

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
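In the noiseless two-channel case, the subspace idea reduces to the classical cross-relation: since y1 = h1 * x and y2 = h2 * x, we have h2 * y1 = h1 * y2, so the stacked filters span the null space of a structured data matrix. A minimal sketch (filter sizes assumed known; the full algorithms also estimate sizes and handle noise):

```python
import numpy as np

def conv_matrix(y, L):
    """Convolution matrix T(y) with T(y) @ h == np.convolve(y, h)
    for a length-L filter h."""
    T = np.zeros((len(y) + L - 1, L))
    for k in range(L):
        T[k:k + len(y), k] = y
    return T

def cross_relation_blind(y1, y2, L):
    """Two-channel blind FIR identification: the stacked filter
    vector [h1; h2] is the null vector of [T(y2), -T(y1)],
    recovered here via SVD (up to a global scale)."""
    A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                      # right singular vector of sigma_min
    return h[:L], h[L:]

# Demo: one unknown source observed through two unknown FIR channels.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
h1 = np.array([1.0, 0.5, -0.2])
h2 = np.array([0.7, -0.3, 0.4])
y1 = np.convolve(x, h1)
y2 = np.convolve(x, h2)
e1, e2 = cross_relation_blind(y1, y2, 3)
```

Both channels are recovered up to one common scale factor, which is the intrinsic ambiguity of blind deconvolution; it is resolved here by normalizing against a known coefficient.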

  9. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

Full Text Available The auditory steady-state response (ASSR) is one of the main approaches in the clinic for health screening and frequency-specific hearing assessment. However, its generation mechanism is still of much controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence while MSAD uses the same ISIs but evenly-spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology and distinct frequency characteristics in synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. Such a similarity is also demonstrated at the individual level, with only MSAD showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed in contributing to the composition of ASSR but

  10. Blind Multiuser Detection by Kurtosis Maximization for Asynchronous Multirate DS/CDMA Systems

    Directory of Open Access Journals (Sweden)

    Peng Chun-Hsien

    2006-01-01

Full Text Available Chi et al. proposed a fast kurtosis maximization algorithm (FKMA) for blind equalization/deconvolution of multiple-input multiple-output (MIMO) linear time-invariant systems. This algorithm has been applied to blind multiuser detection of single-rate direct-sequence/code-division multiple-access (DS/CDMA) systems and blind source separation (or independent component analysis). In this paper, the FKMA is further applied to blind multiuser detection for multirate DS/CDMA systems. The ideas are to properly formulate discrete-time MIMO signal models by converting real multirate users into single-rate virtual users, followed by the use of FKMA for extraction of virtual users' data sequences associated with the desired user, and recovery of the data sequence of the desired user from estimated virtual users' data sequences. Assuming that all the users' spreading sequences are given a priori, two multirate blind multiuser detection algorithms (with either a single receive antenna or multiple antennas), which also enjoy the merits of superexponential convergence rate and guaranteed convergence of the FKMA, are proposed in the paper, one based on a convolutional MIMO signal model and the other based on an instantaneous MIMO signal model. Some simulation results are then presented to demonstrate their effectiveness and to provide a performance comparison with some existing algorithms.

  11. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have previously been tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach; an estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for

  12. Distributed capillary adiabatic tissue homogeneity model in parametric multi-channel blind AIF estimation using DCE-MRI

    Czech Academy of Sciences Publication Activity Database

    Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, M.; Standara, M.; Starčuk jr., Zenon; Taxt, T.

    2016-01-01

    Roč. 75, č. 3 (2016), s. 1355-1365 ISSN 0740-3194 R&D Projects: GA ČR GAP102/12/2380; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : dynamic contrast-enhanced magnetic resonance imaging * multi-channel blind deconvolution * arterial input function * impulse residue function * renal cell carcinoma Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.924, year: 2016

  13. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

Thanks to a reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm from the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model comprising a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in a pseudo-code form where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure, baseline removal then peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment has been conducted on real spectra. In this study a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, has been observed and validated by an extended statistical analysis.
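The additive model can be illustrated with a much-simplified joint linear fit: one least-squares solve estimates the polynomial baseline and the peak amplitudes together instead of sequentially. Unlike the paper's method, this sketch assumes candidate peak positions are known and omits the sparsity prior (all names and the synthetic spectrum are illustrative):

```python
import numpy as np

def joint_baseline_peak_fit(y, positions, peak_shape, degree=2):
    """One linear solve for polynomial-baseline coefficients and
    peak amplitudes together (joint, not two-step)."""
    n = len(y)
    xs = np.linspace(-1.0, 1.0, n)
    cols = [xs ** d for d in range(degree + 1)]      # baseline basis
    half = len(peak_shape) // 2
    for p in positions:                              # one shifted copy
        col = np.zeros(n)                            # of the known
        lo, hi = max(0, p - half), min(n, p - half + len(peak_shape))
        col[lo:hi] = peak_shape[lo - (p - half):hi - (p - half)]
        cols.append(col)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    baseline = A[:, :degree + 1] @ coef[:degree + 1]
    return baseline, coef[degree + 1:]

# Synthetic low-resolution spectrum: curved baseline plus two peaks.
n = 200
x = np.linspace(-1.0, 1.0, n)
shape = np.exp(-np.arange(-6, 7) ** 2 / 4.0)
baseline_true = 2.0 + x + 0.5 * x ** 2
y = baseline_true.copy()
for pos, amp in [(60, 5.0), (140, 3.0)]:
    y[pos - 6:pos + 7] += amp * shape
baseline_est, amps = joint_baseline_peak_fit(y, [60, 140], shape)
```

Because baseline and peaks compete in the same fit, an overshooting baseline cannot silently swallow peak intensity, which is the failure mode of the sequential remove-then-pick procedure.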

  14. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and can cause catastrophic failures. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structure of stud bolts. This paper presents a method for detecting and sizing a small crack in the root between two adjacent thread crests. The key idea stems from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the thread root to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the thread. The delay time equals the propagation delay of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  15. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies, measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, taken after exposure of the cucumber plants to these 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed both using a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of a known amount of 57Fe or natural abundance Fe, chosen according to the isotopic composition of the sample.
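
    At its core, isotope pattern deconvolution expresses the measured isotope abundances as a linear combination of the natural pattern and the enriched-tracer pattern and solves for the molar fractions by least squares. A minimal sketch, with natural Fe abundances from standard tables and an invented tracer enrichment pattern:

```python
import numpy as np

# Natural Fe isotope abundances (54Fe, 56Fe, 57Fe, 58Fe)
natural = np.array([0.05845, 0.91754, 0.02119, 0.00282])
# Hypothetical 57Fe-enriched tracer pattern (assumed values for illustration)
tracer = np.array([0.002, 0.020, 0.950, 0.028])

# Synthetic "measured" mixture: 90 % natural iron + 10 % tracer.
measured = 0.9 * natural + 0.1 * tracer

# Least-squares deconvolution of the molar fractions of each pattern.
A = np.column_stack([natural, tracer])
fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
tracer_tracee_ratio = fractions[1] / fractions[0]
```

    Because all four isotopes enter the fit, mass-bias effects can be folded into the same model (an extra multiplicative factor per mass), which is what gives the internal correction described in the abstract.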

  16. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using {sup 57}Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned {sup 57}Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low {sup 57}Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of {sup 57}Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)

  17. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.
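
    The normalization built into the deconvolution filter can be illustrated on a single trace pair. The sketch below is a toy version of the filter estimation only (not the FWI gradient computation): the "observed" trace is a thousand times weaker than the modeled one, mimicking offset-dependent decay, yet the band-limited filter absorbs the mismatch as a pure gain. All signal parameters are invented.

```python
import numpy as np

nt, dt = 256, 0.004
t = np.arange(nt) * dt

# Same waveform, very different amplitudes (offset-dependent decay stand-in).
wavelet = np.exp(-((t - 0.3) / 0.02) ** 2)
observed = 1e-3 * wavelet
modeled = wavelet.copy()

# Exponential damping generates artificial low frequencies.
alpha = 5.0
obs_d = observed * np.exp(-alpha * t)
mod_d = modeled * np.exp(-alpha * t)

# Deconvolution filter: modeled normalized by observed in the frequency
# domain, with a small stabilization term in the denominator.
D, M = np.fft.rfft(obs_d), np.fft.rfft(mod_d)
eps = 1e-3 * np.max(np.abs(D)) ** 2
W = (M * np.conj(D)) / (np.abs(D) ** 2 + eps)

# Keep only the artificial low-frequency band for the normalization.
freqs = np.fft.rfftfreq(nt, dt)
band = freqs <= 10.0
W_band = np.where(band, W, 0.0)
w_t = np.fft.irfft(W_band, nt)   # band-limited filter in the time domain
```

    Since the two damped traces differ only by a factor of 1000, the in-band filter is (up to stabilization) a constant gain of 1000, i.e. the amplitude imbalance is removed before any misfit is formed.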

  18. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  19. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, making it possible to `cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to noise, incomplete knowledge of the point spread function, or undersampling. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  20. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring layer thickness, such as oxide on corroded steel, is described. A time-domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that the quality of their results depends on the ability of a skilled operator to line up an arbitrary common feature of the received signals, whereas with deconvolution no operator manipulation is necessary, so less highly trained personnel may successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions, but in the case of magnetite its high sound velocity meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)
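
    The procedure can be sketched in one dimension: deconvolve the layered-system echo with the reference echo from a plain surface, then read the layer thickness off the lag of the interface echo in the recovered response function. This is a simplified illustration with an invented broadband pulse and assumed material values, not the instrument processing of the paper.

```python
import numpy as np

fs = 100e6                     # 100 MHz sampling (assumed)
n = 1024
t = np.arange(n) / fs

# Reference echo from a non-layered surface: a simple broadband pulse.
ref = np.exp(-((t - 2e-6) / 0.2e-6) ** 2)

# Layered system: front-surface echo plus a delayed echo from the
# layer/substrate interface. Round-trip delay = 2 * thickness / velocity.
v_layer = 5000.0               # m/s, assumed sound speed in the layer
thickness = 1.5e-3             # 1.5 mm layer
delay_samples = int(round(2 * thickness / v_layer * fs))   # 60 samples

h_true = np.zeros(n)
h_true[0] = 1.0
h_true[delay_samples] = -0.6   # phase-inverted back-wall reflection
measured = np.fft.irfft(np.fft.rfft(ref) * np.fft.rfft(h_true), n)

# Stabilized (water-level) spectral division recovers the response function.
R, Y = np.fft.rfft(ref), np.fft.rfft(measured)
eps = 1e-3 * np.max(np.abs(R)) ** 2
h_est = np.fft.irfft(Y * np.conj(R) / (np.abs(R) ** 2 + eps), n)

# The lag of the negative interface echo gives the layer thickness.
lag = int(np.argmin(h_est))
thickness_est = 0.5 * lag / fs * v_layer
```

    No feature alignment by an operator is needed: the lag comes directly out of the deconvolved response.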

  1. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration.
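
    The Wiener deconvolution step (2) can be sketched on synthetic data: convolve an invented biphasic MUAP with an invented two-lobed rate modulation to form a noisy "VEMP", then recover the MUAP by regularized spectral division. The waveform shapes and the noise-to-signal constant are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0
n = 512
t = np.arange(n) / fs * 1000.0          # time axis in ms

# Biphasic motor-unit action potential and a two-lobed rate modulation,
# loosely mimicking an inhibitory and a later excitatory component.
muap = np.exp(-((t - 10) / 3) ** 2) - np.exp(-((t - 16) / 3) ** 2)
rate = -1.5 * np.exp(-((t - 15) / 4) ** 2) + 0.5 * np.exp(-((t - 38) / 8) ** 2)

# Measured "VEMP": convolution of the two, plus noise.
vemp = np.fft.irfft(np.fft.rfft(muap) * np.fft.rfft(rate), n)
vemp += 0.01 * rng.standard_normal(n)

# Wiener deconvolution of the VEMP with the (here, known) rate modulation;
# the constant noise-to-signal term regularizes the spectral division.
R = np.fft.rfft(rate)
V = np.fft.rfft(vemp)
nsr = 1e-3 * np.max(np.abs(R)) ** 2
muap_est = np.fft.irfft(V * np.conj(R) / (np.abs(R) ** 2 + nsr), n)
```

    In the actual algorithm the rate modulation is itself parameterized and optimized against the variance modulation; here it is fixed so the deconvolution step stands alone.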

  2. Increasing the darkfield contrast-to-noise ratio using a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging.

    Science.gov (United States)

    Weber, Thomas; Pelzer, Georg; Bayer, Florian; Horn, Florian; Rieger, Jens; Ritter, André; Zang, Andrea; Durst, Jürgen; Anton, Gisela; Michel, Thilo

    2013-07-29

    A novel information retrieval algorithm for X-ray grating-based phase-contrast imaging, based on the deconvolution of the object and the reference phase stepping curve (PSC) as proposed by Modregger et al., was investigated in this paper. We applied the method for the first time to data obtained with a polychromatic spectrum and compared the results to those obtained with the commonly used method based on a Fourier analysis. We confirmed the expectation that both methods deliver the same results for the absorption and the differential phase image. For the darkfield image, a mean contrast-to-noise ratio (CNR) increase by a factor of 1.17 was found with the new method. Furthermore, the dose-saving potential of the deconvolution method was estimated experimentally. It is found that the conventional method requires a dose higher by a factor of 1.66 to obtain a CNR similar to that of the novel method. A further analysis of the data revealed that the improvement in CNR and dose efficiency is due to the superior background noise properties of the deconvolution method, but comes at the cost of comparability between measurements at different applied dose values, as the mean value becomes dependent on the photon statistics used.

  3. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low-frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long-wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that the proposed FWI based on the deconvolution filter can generate a convergent long-wavelength structure from the artificial low-frequency components introduced by the exponential damping.

  4. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

    Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.

  5. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images
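
    The scatter-removal idea can be sketched in a few lines: model the detected image as the primary image convolved with a PSF consisting of a sharp primary term plus a broad Gaussian scatter term, then invert that PSF in the frequency domain. This is a noiseless toy with invented scatter parameters, not the measured PSF of the paper; it works here because the PSF's transfer function never vanishes (its DC gain is 1 and the delta term keeps its magnitude above 1 minus the scatter fraction).

```python
import numpy as np

n = 64
y, x = np.mgrid[:n, :n]
r2 = (x - n // 2) ** 2 + (y - n // 2) ** 2

# Scatter PSF: delta (primary events) + broad Gaussian (scattered events).
scatter_fraction = 0.4        # assumed fraction of detected events that are scatter
sigma = 8.0                   # assumed width of the scatter distribution (pixels)
kernel = np.exp(-r2 / (2 * sigma ** 2))
kernel *= scatter_fraction / kernel.sum()

psf = np.zeros((n, n))
psf[n // 2, n // 2] = 1.0 - scatter_fraction
psf += kernel
psf = np.fft.ifftshift(psf)   # move the PSF center to the origin for the FFT

# Scatter-degraded image of a simple square object.
primary = np.zeros((n, n))
primary[24:40, 24:40] = 1.0
degraded = np.real(np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(psf)))

# Inverse filtering in the frequency domain removes the scatter.
H = np.fft.fft2(psf)
restored = np.real(np.fft.ifft2(np.fft.fft2(degraded) / H))
```

    With real, noisy data the straight division would be replaced by a stabilized inverse filter, but the contrast recovery mechanism is the same.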

  6. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at each energy.

  7. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation face a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. To alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, namely Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free-space-propagation x-ray phase contrast images of simple geometric phantoms. The algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate of the above-mentioned methods for phase contrast image restoration; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
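
    Of the three methods, the Tikhonov step is the simplest to sketch (ForWaRD follows it with wavelet-domain thresholding of the residual noise). The toy below blurs an invented 1D edge profile with a Gaussian PSF, adds noise, and applies Tikhonov-regularized deconvolution; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.zeros(n)
x[100:140] = 1.0              # simple edge-profile "phantom"

# Gaussian PSF standing in for finite source size / detector resolution.
t = np.arange(n)
psf = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
psf /= psf.sum()
H = np.fft.rfft(np.fft.ifftshift(psf))

# Blurred, noisy measurement.
y = np.fft.irfft(np.fft.rfft(x) * H, n) + 0.005 * rng.standard_normal(n)

# Tikhonov-regularized deconvolution: lam trades noise amplification
# against residual blur (the Fourier-regularization half of ForWaRD).
lam = 1e-2
X_tik = np.conj(H) * np.fft.rfft(y) / (np.abs(H) ** 2 + lam)
x_tik = np.fft.irfft(X_tik, n)
```

    ForWaRD then shrinks the leaked noise in the wavelet domain, which is why it outperforms either Fourier-only method on fringe detail.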

  8. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocity caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records to compute deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data alone, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of the deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
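
    The core operation, deconvolving the surface record by the borehole record to isolate the propagation between the two sensors, can be sketched with synthetic data. Depth, velocity, bandwidth and the reflection coefficient below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0
n = 2048

# Incident wavefield at the borehole sensor: band-limited random coda.
w = rng.standard_normal(n)
W = np.fft.rfft(w)
freqs = np.fft.rfftfreq(n, 1 / fs)
W *= np.exp(-((freqs - 5.0) / 3.0) ** 2)      # band-limit around 5 Hz
borehole = np.fft.irfft(W, n)

# Surface record: the same wave delayed by the borehole-to-surface travel
# time, plus a weaker free-surface reverberation.
depth, vs = 100.0, 400.0                       # m, m/s -> 0.25 s travel time
d = int(round(depth / vs * fs))                # 25 samples
surface = np.roll(borehole, d) + 0.5 * np.roll(borehole, 3 * d)

# Deconvolution interferometry: regularized spectral division of the
# surface record by the borehole record.
B, S = np.fft.rfft(borehole), np.fft.rfft(surface)
eps = 1e-2 * np.mean(np.abs(B) ** 2)
g = np.fft.irfft(S * np.conj(B) / (np.abs(B) ** 2 + eps), n)

# The lag of the first peak gives the travel time, hence the velocity.
travel_time = np.argmax(g[: n // 2]) / fs
velocity = depth / travel_time
```

    Substituting a cleaner Hi-net record for `borehole` in the division is exactly the combination the abstract proposes: the deconvolved waveform is unchanged in form, but its S/N improves with the quality of the denominator trace.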

  9. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

    Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (¹⁹⁹IHg and ²⁰¹MeHg) are added before sample preparation to quantify the extent of the methylation and demethylation processes. Focused microwave digestion was evaluated for the quantitative extraction of these compounds from solid matrices of environmental interest. Satisfactory results were obtained for different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464), both by GC-ICPMS and by GC-MS, demonstrating the suitability of the proposed analytical method. (author)

  10. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.
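
    What Solver performs here is ordinary nonlinear least-squares fitting of a sum of single-peak functions. The same deconvolution can be sketched with SciPy in place of Excel, using the common Kitis single-peak approximation for first-order kinetics; the peak parameters and noise level below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

k = 8.617e-5  # Boltzmann constant, eV/K

def glow_peak(T, Im, E, Tm):
    """First-order kinetics TL peak (Kitis et al. single-peak approximation)."""
    xx = (E / (k * T)) * (T - Tm) / Tm
    d = 2 * k * T / E
    dm = 2 * k * Tm / E
    return Im * np.exp(1 + xx - (T ** 2 / Tm ** 2) * np.exp(xx) * (1 - d) - dm)

def two_peaks(T, Im1, E1, Tm1, Im2, E2, Tm2):
    return glow_peak(T, Im1, E1, Tm1) + glow_peak(T, Im2, E2, Tm2)

# Synthetic glow curve: two strongly overlapping first-order peaks plus noise.
rng = np.random.default_rng(5)
T = np.linspace(350.0, 550.0, 400)
true = (1.0, 1.1, 430.0, 0.7, 1.3, 465.0)
y = two_peaks(T, *true) + 0.01 * rng.standard_normal(T.size)

# Least-squares deconvolution into components, from rough initial guesses
# (the role the Solver utility plays in the spreadsheet version).
p0 = (0.8, 1.0, 425.0, 0.9, 1.2, 470.0)
popt, _ = curve_fit(two_peaks, T, y, p0=p0)
```

    As in the spreadsheet workflow, the quality of the initial guesses and any parameter constraints determine whether strongly overlapping peaks are resolved correctly.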

  11. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the Glow Curve Analysis Intercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated. (authors)

  12. The blind leading the blind: use and misuse of blinding in randomized controlled trials.

    Science.gov (United States)

    Miller, Larry E; Stewart, Morgan E

    2011-03-01

    The use of blinding strengthens the credibility of randomized controlled trials (RCTs) by minimizing bias. However, there is confusion surrounding the definition of blinding as well as the terms single, double, and triple blind. It has been suggested that these terms should be discontinued due to their broad misinterpretation. We recommend that, instead of abandoning the use of these terms, explicit definitions of blinding should be adopted. We address herein the concept of blinding, propose standard definitions for the consistent use of these terms, and detail when different types of blinding should be utilized. Standardizing the definition of blinding and utilizing proper blinding methods will improve the quality and clarity of reporting in RCTs.

  13. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
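
    The MLEM deconvolution idea can be sketched in one dimension: averaging frames at known positions is equivalent to convolving the true image with a motion kernel built from those positions, and Richardson-Lucy (MLEM) multiplicative updates then undo the blur while keeping the estimate non-negative. The profile, motion range and iteration count below are invented; the paper works on 3D reconstructed volumes.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 128

# "True" 1D activity profile, a stand-in for the brain phantom.
x_true = np.zeros(n)
x_true[40:60] = 1.0
x_true[80:90] = 2.0

# Known motion: averaging 20 shifted frames = convolution with a kernel
# that has weight 1/20 at each recorded position.
shifts = rng.integers(-5, 6, size=20)
kernel = np.zeros(n)
for s in shifts:
    kernel[s % n] += 1.0 / len(shifts)

K = np.fft.rfft(kernel)
blurred = np.fft.irfft(np.fft.rfft(x_true) * K, n)

# MLEM (Richardson-Lucy) deconvolution: forward-project, compare,
# back-project the ratio, and update multiplicatively.
est = np.full(n, blurred.mean())
for _ in range(200):
    forward = np.fft.irfft(np.fft.rfft(est) * K, n)
    ratio = blurred / np.maximum(forward, 1e-12)
    back = np.fft.irfft(np.fft.rfft(ratio) * np.conj(K), n)
    est = np.maximum(est * back, 0.0)
```

    The multiplicative form is what makes the approach attractive for PET: non-negativity is preserved automatically and no scanner-specific reconstruction machinery is needed.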

  14. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  15. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

    Full Text Available The deconvolution effect of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems is investigated using the finite element method. Two different earthquake input mechanisms are used to account for the deconvolution effects in the analyses: the standard rigid-base input model and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example, and the 1999 Kocaeli earthquake ground motion is selected as the near-fault ground motion. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  16. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...

  17. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow curves has been developed. A non-linear function describing a single glow peak is fitted to experimental points using the least-squares Levenberg-Marquardt method. The main advantage of GlowFit is its ability to resolve complex TL glow curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly Microsoft Windows program. Its graphic interface enables easy, intuitive manipulation of glow peaks, both at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)

  18. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.

  19. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request

  20. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. 
Higher contrast results were

  1. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method with a blind source separation method is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering method was applied to the engine to isolate interference noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate noise sources located in different places. Then, for noise sources located in the same place, a blind source separation method was utilized to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine, which are concentrated around 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise.

  2. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of the potential earthquake response of a soil deposit, a soil structure, or a soil-foundation-structure system requires knowledge of earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. The process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence its output. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  3. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the regularized Non-Negative Least Squares (NNLS) method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm with significantly less computer processing time
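
    The Richardson-Lucy update used in such comparisons can be sketched in one dimension. This is an illustrative toy (synthetic line, assumed Gaussian resolution function), not the paper's implementation or data:

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=100):
    # Richardson-Lucy deconvolution: multiplicative maximum-likelihood
    # updates appropriate for Poisson-distributed counts.
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full(measured.shape, measured.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode='same')
        ratio = measured / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
    return estimate

# A sharp annihilation-like line blurred by a Gaussian instrumental response.
true = np.zeros(101); true[50] = 100.0
psf = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
measured = np.convolve(true, psf / psf.sum(), mode='same')
restored = richardson_lucy(measured, psf)
```

Each iteration reblurs the current estimate, compares it to the measurement, and multiplies by the back-projected ratio; the estimate stays non-negative by construction.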

  4. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717

  5. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    Full Text Available The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.
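
    The general idea of PCA-based two-way admixture estimation can be sketched as follows. This is a simplified illustration of the principle, not the authors' pipeline: genotypes are simulated, and the estimator simply projects individuals onto the leading principal component of the parental reference panel and rescales so the parental group means map to ancestry proportions 1 and 0.

```python
import numpy as np

def pca_admixture(parent_a, parent_b, query):
    # Project genotypes onto the leading principal component of the
    # combined parental reference panel, then rescale so that the two
    # parental group means map to ancestry proportions 1 and 0.
    ref = np.vstack([parent_a, parent_b])
    mean = ref.mean(axis=0)
    _, _, vt = np.linalg.svd(ref - mean, full_matrices=False)
    pc1 = vt[0]
    a_mean = ((parent_a - mean) @ pc1).mean()
    b_mean = ((parent_b - mean) @ pc1).mean()
    score = (query - mean) @ pc1
    return np.clip((score - b_mean) / (a_mean - b_mean), 0.0, 1.0)

# Simulated biallelic genotypes (0/1/2 allele counts) for two diverged
# parental groups and F1 hybrids with expected 50/50 ancestry.
rng = np.random.default_rng(1)
n_markers = 300
p_a = rng.uniform(0.05, 0.35, n_markers)
p_b = rng.uniform(0.65, 0.95, n_markers)
parent_a = rng.binomial(2, p_a, (40, n_markers)).astype(float)
parent_b = rng.binomial(2, p_b, (40, n_markers)).astype(float)
hybrids = rng.binomial(2, 0.5 * (p_a + p_b), (40, n_markers)).astype(float)
admix = pca_admixture(parent_a, parent_b, hybrids)
```

Restricting the marker matrix to a small panel of the most differentiated markers would mimic the AIM-panel experiment described in the record.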

  6. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  7. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...

  8. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    International Nuclear Information System (INIS)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Gibb, Andrew G.; Halpern, Mark; Marsden, Gaelen; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; France, Kevin; Gundersen, Joshua O.; Hughes, David H.; Martin, Peter G.; Olmi, Luca

    2011-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting

  9. The deconvolution of Doppler-broadened positron annihilation measurements using fast Fourier transforms and power spectral analysis

    International Nuclear Information System (INIS)

    Schaffer, J.P.; Shaughnessy, E.J.; Jones, P.L.

    1984-01-01

    A deconvolution procedure that corrects Doppler-broadened positron annihilation spectra for instrument resolution is described. The method employs fast Fourier transforms, is model independent, and does not require iteration. The mathematical difficulties associated with the ill-posed first-order Fredholm integral equation are overcome by using power spectral analysis to select a limited number of low-frequency Fourier coefficients. The FFT/power spectrum method is then demonstrated for an irradiated high-purity single-crystal sapphire sample. (orig.)
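
    The FFT/power-spectrum approach, dividing the Fourier transforms and retaining only a limited number of low-frequency coefficients, can be sketched in one dimension. This is a toy illustration with synthetic Gaussians (the cutoff `n_keep` stands in for the coefficient count that power spectral analysis would select), not the paper's procedure:

```python
import numpy as np

def fft_deconvolve(measured, resolution, n_keep):
    # Fourier-quotient deconvolution: divide spectra coefficient-wise,
    # but keep only the n_keep lowest-frequency coefficients to avoid
    # amplifying high-frequency noise where the quotient is unstable.
    F = np.fft.rfft(measured)
    H = np.fft.rfft(resolution / resolution.sum())
    out = np.zeros_like(F)
    out[:n_keep] = F[:n_keep] / H[:n_keep]
    return np.fft.irfft(out, n=measured.size)

# Synthetic Doppler line: a Gaussian physical spectrum blurred by a
# (circularly wrapped) Gaussian instrument resolution function.
N = 256
i = np.arange(N)
true = np.exp(-0.5 * ((i - 128) / 10.0) ** 2)
res = np.exp(-0.5 * (np.minimum(i, N - i) / 5.0) ** 2)  # centered at index 0
measured = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(res / res.sum()), n=N)
restored = fft_deconvolve(measured, res, n_keep=30)
```

In the noise-free case the quotient is exact, so the restored spectrum differs from the true one only by the discarded high-frequency tail.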

  10. Integration of singularity and zonality methods for prospectivity map of blind mineralization

    Directory of Open Access Journals (Sweden)

    samaneh safari

    2016-12-01

    Full Text Available Singularity analysis, based on fractal and multifractal theory, is a technique for detecting depletion and enrichment in geochemical exploration, while the index of vertical geochemical zonality (Vz) of Pb.Zn/Cu.Ag is a practical method for the exploration of blind porphyry copper mineralization. In this study, these methods are combined for the recognition and delineation of Vz enrichment in Jebal-Barez in the south of Iran. The studied area is located in the Shar-E-Babak–Bam ore field in the southern part of the Central Iranian volcano-plutonic magmatic arc. The region has a semiarid climate, mountainous topography, and poor vegetation cover. Seven hundred stream sediment samples were taken from the region; the geochemical data subsets represent total drainage basin areas. The samples were analyzed for Cu, Zn, Ag, Pb, Au, W, As, Hg, Ba and Bi by the atomic absorption method. A prospectivity map for blind mineralization in this area is presented. The results are in agreement with previous studies focused on this region. Kerver is detected as the main blind mineralization in Jebal-Barez, which had previously been intersected by boreholes drilled for exploration purposes. In this research, it has been demonstrated that employing the singularity of geochemical zonality anomalies, as opposed to the singularity of individual elements, improves the mapping of mineral prospectivity.

  11. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

    We solve numerically a deconvolution problem to extract the undisturbed spectrum from a measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of neutral pions by detecting their decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation.

  12. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed order of kinetics and for continuous-trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) for the case of the continuous-trap distribution or a constant α for the case of mixed-order kinetics

  13. A simple method to improve the spatial uniformity of venetian-blind photomultiplier tubes

    International Nuclear Information System (INIS)

    Santos, J.M.F. dos; Veloso, J.F.C.A.; Morgado, R.E.

    1996-01-01

    An improvement in the uniformity of venetian-blind photomultiplier tubes has been achieved by reducing the voltage difference between the first and second dynodes. The method has been applied to a gas proportional scintillation counter (GPSC) instrumented with a venetian-blind photomultiplier (PMT). When exposed to a 20-mm collimated 5.9-keV x-ray beam, an overall improvement in energy resolution for the GPSC/PMT combination from 20% to 11.5% was achieved. An alternative method that reduces the photocathode-to-first-dynode voltage was less effective and resulted in a severe degradation of detector energy resolution

  14. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

    Roč. 64, č. 2 (2004), s. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords : confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  15. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are shortly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  16. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are shortly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  17. Using Key Informant Method to Determine the Prevalence and Causes of Childhood Blindness in South-Eastern Nigeria.

    Science.gov (United States)

    Aghaji, Ada E; Ezegwui, Ifeoma R; Shiweobi, Jude O; Mamah, Cyril C; Okoloagu, Mary N; Onwasigwe, Ernest N

    2017-12-01

    To determine the prevalence and causes of childhood blindness in an underserved community in south-eastern Nigeria using the key informant method. This was a descriptive cross-sectional study. Key informants (KI) appointed by their respective communities received 1-day training on identification of blind children in their communities. Two weeks later, the research team visited the agreed sites within the community and examined the identified children. The World Health Organization eye examination record for blind children was used for data collection. Data entry and analysis were done with the Statistical Package for Social Sciences (SPSS) version 17.0. Fifteen blind or severely visually impaired children (age range 3 months to 15 years) were identified in this community; nine of these were brought by the KIs. The prevalence of childhood blindness/severe visual impairment (BL/SVI) was 0.12 per 1000 children. By anatomical classification, operable cataract in 6 (40.0%) was the leading cause of BL/SVI in the series; followed by optic nerve lesions (atrophy/hypoplasia) in 3 (20.0%). The etiology of BL/SVI is unknown for the majority of the children (66.7%). It was presumed hereditary in four children (26.7%). Sixty percent of the blindness was judged avoidable. Only three children (20.0%) were enrolled in the Special Education Centre for the Blind. The prevalence of childhood BL/SVI in our study population is low but over half of the blindness is avoidable. There may be a significant backlog of operable childhood cataract in south-eastern Nigeria. The KI method is a practical method for case finding of blind children in rural communities.

  18. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain an acceptable signal-to-noise ratio. Comparisons of the algorithm against Tikhonov-regularized reconstructions demonstrate considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity-constrained spectral deconvolution scheme is effective in minimizing line-shape distortions. The dual-resolution reconstruction scheme is capable of minimizing spectral leakage artifacts.

  19. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    International Nuclear Information System (INIS)

    Floberg, J M; Holden, J E

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)
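
    The two-stage idea, Gaussian smoothing followed by EM (Richardson-Lucy type) deconvolution with the same kernel, can be sketched on a toy 4D array. This is an illustration of the principle only, not the authors' STEM implementation; the array shape, sigma, and iteration count are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stem_filter(data, sigma, n_em=5):
    # Stage 1: Gaussian smoothing over all (temporal and spatial) axes.
    smoothed = gaussian_filter(data, sigma)
    # Stage 2: EM (Richardson-Lucy style) deconvolution with the same
    # Gaussian kernel, restoring frequencies suppressed by stage 1.
    estimate = smoothed.copy()
    for _ in range(n_em):
        reblurred = gaussian_filter(estimate, sigma)
        estimate *= gaussian_filter(smoothed / np.maximum(reblurred, 1e-12), sigma)
    return estimate

# Toy dynamic PET volume: (time, z, y, x) with a hot voxel on a
# uniform background.
frames = np.ones((8, 16, 16, 16))
frames[4, 8, 8, 8] = 100.0
smoothed_only = gaussian_filter(frames, 1.0)  # stage 1 alone, for comparison
filtered = stem_filter(frames, sigma=1.0)
```

A few EM iterations recover much of the peak amplitude flattened by the initial smoothing while keeping the broadband noise suppression of stage 1.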

  20. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  1. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    Directory of Open Access Journals (Sweden)

    Dayong Zhou

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel, a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our algorithm under low-SNR conditions. Simulation results show the superior performance of the proposed methods.
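
    The trained MMSE equalizer used as the benchmark above can be sketched as a least-squares tap fit; the channel coefficients, tap count and decision delay below are hypothetical choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
channel = np.array([1.0, 0.5, 0.2])                # hypothetical linear channel
symbols = rng.choice([-1.0, 1.0], size=5000)       # white BPSK source
received = np.convolve(symbols, channel)[:5000]
received += rng.normal(0.0, 0.01, size=5000)       # high-SNR additive noise

# Trained MMSE linear equalizer: least-squares fit of FIR taps so that the
# equalizer output matches the transmitted symbols at a chosen delay.
taps, delay = 11, 5
X = np.column_stack([np.roll(received, k) for k in range(taps)])
X, target = X[taps:], symbols[taps - delay:5000 - delay]
w, *_ = np.linalg.lstsq(X, target, rcond=None)
bit_error_rate = np.mean(np.sign(X @ w) != target)
```

    A blind method, by contrast, must reach comparable performance without access to the `symbols` training sequence.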

  2. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author)

  3. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  4. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    A generally known problem of GC separation of the trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with a cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 degrees C. The resolution of these CLA isomers obtained under the given conditions was not high enough for direct quantitative analysis, but was sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by the separation of a model CLA mixture prepared by mixing a commercial CLA mixture with a CLA isomer fraction obtained by HPLC semi-preparative separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two minor isomers trans-7,cis-9 and trans-8,cis-10 (in ratios up to 1:100).

  5. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and shown in various studies the potential to suppress artifacts due to

  6. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells, which are most likely the cytoskeletal MreB protein and the division-site-regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  7. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and post-processed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan
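
    The additivity property the algorithm relies on is easy to verify numerically. The exponential impulse response and random drives below are stand-ins, not a model of an actual spin system:

```python
import numpy as np

rng = np.random.default_rng(2)
impulse_response = np.exp(-np.arange(40) / 8.0)    # stand-in spin-system response

def respond(excitation):
    """Linear time-invariant system: excitation convolved with the response."""
    return np.convolve(excitation, impulse_response)[:len(excitation)]

# Two pulsed field-modulated "experiments": excitation nonzero only during
# the up-field half-scan in one, only during the down-field half in the other.
up_drive = np.zeros(200)
up_drive[:100] = rng.random(100)
down_drive = np.zeros(200)
down_drive[100:] = rng.random(100)

full_cycle = respond(up_drive + down_drive)        # response to the full scan
summed = respond(up_drive) + respond(down_drive)   # sum of separate responses
```

    Because the system is linear, `full_cycle` and `summed` agree exactly, which is what lets the full-cycle signal be modeled as two independent experiments.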

  8. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positive and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...

  9. Gradient method for blind chaotic signal separation based on proliferation exponent

    International Nuclear Information System (INIS)

    Lü Shan-Xiang; Wang Zhao-Shan; Hu Zhi-Hui; Feng Jiu-Chao

    2014-01-01

    A new method to perform blind separation of chaotic signals is articulated in this paper, which takes advantage of the underlying features in the phase space for identifying various chaotic sources. Without incorporating any prior information about the source equations, the proposed algorithm can not only separate the mixed signals in just a few iterations, but also outperform the fast independent component analysis (FastICA) method when noise contamination is considerable. (general)
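
    The FastICA baseline mentioned above can be sketched in a minimal symmetric form with a tanh contrast. The mixing matrix and sources below are invented for illustration; this is the generic ICA baseline, not the authors' phase-space method:

```python
import numpy as np

def fastica(X, iters=200, seed=0):
    """Minimal symmetric FastICA with a tanh contrast. Rows of X are the
    observed mixtures; returns the estimated source signals."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))                 # whiten: covariance -> identity
    Z = (E / np.sqrt(d)).T @ X
    W = rng.normal(size=(X.shape[0], X.shape[0]))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        d2, E2 = np.linalg.eigh(W @ W.T)             # symmetric decorrelation
        W = E2 @ np.diag(np.maximum(d2, 1e-12) ** -0.5) @ E2.T @ W
    return W @ Z

rng = np.random.default_rng(1)
sources = np.vstack([
    np.sin(np.linspace(0.0, 16 * np.pi, 2000)),      # deterministic source
    rng.uniform(-1.0, 1.0, 2000),                    # independent uniform source
])
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
recovered = fastica(mixing @ sources)
```

    ICA recovers sources only up to permutation and sign, so results are judged by correlation against the true sources.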

  10. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  11. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode /SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode /SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  12. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
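
    The constrained inversion described above can be sketched as projected gradient descent on a Tikhonov-smoothed least-squares objective. The exponential response, signal lengths and regularization weight below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_response(rain, output, m, lam=1e-3, iters=2000):
    """Estimate a causal, positive, smooth impulse response h of length m by
    projected gradient descent on ||A h - y||^2 + lam * ||D h||^2 with h >= 0.
    Causality is built into A (output depends only on past input)."""
    n = len(output)
    A = np.zeros((n, m))
    for k in range(m):                       # causal convolution matrix
        A[k:, k] = rain[:n - k]
    D = np.diff(np.eye(m), axis=0)           # first-difference smoothness operator
    G = A.T @ A + lam * (D.T @ D)
    b = A.T @ output
    step = 1.0 / np.linalg.norm(G, 2)        # safe step from the Lipschitz constant
    h = np.zeros(m)
    for _ in range(iters):
        h = np.maximum(h - step * (G @ h - b), 0.0)   # gradient step + positivity
    return h

rng = np.random.default_rng(1)
true_h = np.exp(-np.arange(30) / 5.0)
true_h /= true_h.sum()                       # a smooth, causal "residence time" curve
rain = rng.random(300)
output = np.convolve(rain, true_h)[:300]     # noiseless aquifer response
h_est = estimate_response(rain, output, m=30)
```

    The single parameter `lam` plays the role described in the abstract: it trades smoothness of the recovered curve against fidelity to the data.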

  13. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., the point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  14. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments

    Directory of Open Access Journals (Sweden)

    Saeed Daneshmand

    2016-10-01

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.

  15. Evaluation of blind signal separation methods

    NARCIS (Netherlands)

    Schobben, D.W.E.; Torkkola, K.; Smaragdis, P.

    1999-01-01

    Recently, many new Blind Signal Separation (BSS) algorithms have been introduced. Authors evaluate the performance of their algorithms in various ways. Among these are speech recognition rates, plots of separated signals, plots of cascaded mixing/unmixing impulse responses, and signal-to-noise ratios. Clearly

  16. Prevalence and causes of severe visual impairment and blindness among children in the lorestan province of iran, using the key informant method.

    Science.gov (United States)

    Razavi, Hessom; Kuper, Hannah; Rezvan, Farhad; Amelie, Khatere; Mahboobi-Pur, Hassan; Oladi, Mohammad Reza; Muhit, Mohammad; Hashemi, Hassan

    2010-03-01

    To estimate the prevalence and causes of severe visual impairment and blindness among children in Lorestan province of Iran, and to assess the feasibility of the Key Informant Method in this setting. Potential cases were identified using the Key Informant Method in 3 counties of Lorestan province during June through August 2008, and referred for examination. Causes of severe visual impairment/blindness were determined and categorized using standard World Health Organization methods. Of 123 children referred for examination, 27 were confirmed to have severe visual impairment or blindness. The median age was 11 years (interquartile range 6-13), and 59% were girls. After adjusting for non-attenders, the estimated prevalence of severe visual impairment/blindness was 0.04% (0.03-0.05). The main site of abnormality was the retina (44%), followed by disorders of the whole eye (33%). The majority of causes had a hereditary etiology (70%), which was associated with a family history of blindness (P = 0.002). Potentially avoidable causes of severe visual impairment/blindness were found in 14 children (52%). Almost all children with severe visual impairment/blindness had a history of parental consanguinity (93%). Our findings suggest a moderate prevalence of childhood blindness in Lorestan province of Iran, a high proportion of which may be avoidable, given improved access to ophthalmic and genetic counselling services in rural areas. The Key Informant Method is feasible in Iran; future research is discussed.

  17. Blinded trials taken to the test

    DEFF Research Database (Denmark)

    Hróbjartsson, A; Forfang, E; Haahr, M T

    2007-01-01

    Blinding can reduce bias in randomized clinical trials, but blinding procedures may be unsuccessful. Our aim was to assess how often randomized clinical trials test the success of blinding, the methods involved, and how often blinding is reported as being successful.

  18. Perception of blindness and blinding eye conditions in rural communities.

    Science.gov (United States)

    Ashaye, Adeyinka; Ajuwon, Ademola Johnson; Adeoti, Caroline

    2006-01-01

    PURPOSE: The purpose of this qualitative study was to explore the causes and management of blindness and blinding eye conditions as perceived by rural dwellers of two Yoruba communities in Oyo State, Nigeria. METHODS: Four focus group discussions were conducted among residents of Iddo and Isale Oyo, two rural Yoruba communities in Oyo State, Nigeria. Participants included sighted people, those who were partially or totally blind, and community leaders. Ten patent medicine sellers and 12 traditional healers were also interviewed on their perception of the causes and management of blindness in their communities. FINDINGS: Blindness was perceived as an increasing problem in the communities. Multiple factors were perceived to cause blindness, including germs, onchocerciasis and supernatural forces. Traditional healers believed that blindness could be cured, with many claiming to have cured blindness in the past. However, all agreed that patience was an important requirement for the cure of blindness. The patent medicine sellers' reports were similar to those of the traditional healers. The barriers to use of orthodox medicine were mainly fear, misconceptions and perceived high costs of care. There was a consensus among group discussants and informants that blindness has severe social and economic consequences, including not being able to see and assess the quality of what the sufferer eats, perpetual sadness, loss of sleep and dependence on other persons for daily activities. CONCLUSION: Local beliefs associated with the causation, symptoms and management of blindness and blinding eye conditions in the rural Yoruba communities identified provide a bridge for understanding local perspectives and a basis for implementing appropriate primary eye care programs. PMID:16775910

  19. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described and some examples are presented to help users run the programs and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)
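
    FFT-based deconvolution of this kind amounts to division in the Fourier domain, usually damped so that noise-dominated frequencies do not blow up. The peak shapes and damping constant below are illustrative, not the IN10 program itself:

```python
import numpy as np

def fft_deconvolve(measured, resolution, eps=1e-6):
    """Deconvolve by division in the Fourier domain, with a small damping
    term eps to keep noise-dominated frequencies under control."""
    n = len(measured)
    M = np.fft.rfft(measured)
    R = np.fft.rfft(resolution)
    H = np.conj(R) / (np.abs(R) ** 2 + eps)    # Wiener-like damped inverse
    return np.fft.irfft(M * H, n)

n = 256
x = np.arange(n)
# Two Gaussian peaks as the "true" spectrum (illustrative numbers).
truth = np.exp(-0.5 * ((x - 100) / 6.0) ** 2) + 0.6 * np.exp(-0.5 * ((x - 130) / 6.0) ** 2)
# Instrument resolution function, centred on sample 0 (circular convention).
res = np.exp(-0.5 * (np.minimum(x, n - x) / 4.0) ** 2)
res /= res.sum()
measured = np.fft.irfft(np.fft.rfft(truth) * np.fft.rfft(res), n)
recovered = fft_deconvolve(measured, res)
```

    With noisy data, `eps` must grow accordingly, trading resolution for stability.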

  20. Analytic study of the Tadoma method: language abilities of three deaf-blind subjects.

    Science.gov (United States)

    Chomsky, C

    1986-09-01

    This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

  1. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors
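
    The least-squares fit approach used here as the reference can be sketched for known arrival times. The pulse shape below is a generic fast-rise/slow-decay stand-in for an LSO or NaI(Tl) pulse, not the authors' model:

```python
import numpy as np

def fit_pileup(waveform, pulse, arrivals):
    """Recover individual pulse amplitudes from a piled-up waveform by linear
    least squares, given the single-pulse shape and the arrival samples."""
    n = len(waveform)
    basis = np.zeros((n, len(arrivals)))
    for j, t in enumerate(arrivals):
        m = min(len(pulse), n - t)
        basis[t:t + m, j] = pulse[:m]          # shifted copy of the pulse shape
    amps, *_ = np.linalg.lstsq(basis, waveform, rcond=None)
    return amps

t = np.arange(80)
pulse = (1.0 - np.exp(-t / 2.0)) * np.exp(-t / 15.0)   # fast rise, slow decay
pulse /= pulse.max()

wave = np.zeros(150)                                    # two overlapping pulses
wave[10:90] += 1.0 * pulse
wave[30:110] += 0.4 * pulse
amps = fit_pileup(wave, pulse, [10, 30])
```

    Real-time variants must also estimate the arrival times and cope with digitization noise, which is where the two new approaches in the paper come in.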

  2. An improved image non-blind image deblurring method based on FoEs

    Science.gov (United States)

    Zhu, Qidan; Sun, Lei

    2013-03-01

    Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ripples effectively, in contrast to maximum likelihood (ML), but they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to validate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.

  3. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)_n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
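
    The stutter-removal idea can be sketched as back-substitution against an assumed stutter pattern, starting from the longest band. The kernel and allele intensities below are hypothetical, not the authors' calibrated model:

```python
import numpy as np

def remove_stutter(observed, stutter):
    """Remove PCR stutter from band intensities. A true allele at length i
    contributes stutter[k] of its signal at length i - k (shorter bands), so
    observed[i] = sum_k stutter[k] * true[i + k]; solve from the top down."""
    n = len(observed)
    true = np.zeros(n)
    for i in range(n - 1, -1, -1):
        acc = sum(stutter[k] * true[i + k]
                  for k in range(1, len(stutter)) if i + k < n)
        true[i] = (observed[i] - acc) / stutter[0]
    return true

stutter = np.array([0.7, 0.2, 0.1])     # hypothetical stutter pattern
alleles = np.zeros(12)
alleles[8] = 100.0                      # two closely spaced alleles whose
alleles[6] = 100.0                      # stutter bands overlap
observed = np.zeros(12)
for i in range(12):                     # forward model: apply the stutter
    for k in range(len(stutter)):
        if i + k < 12:
            observed[i] += stutter[k] * alleles[i + k]
recovered = remove_stutter(observed, stutter)
```

    With the stutter mathematically removed, allele calling reduces to thresholding `recovered`, which is what enables automation.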

  4. POSTERIOR SEGMENT CAUSES OF BLINDNESS AMONG CHILDREN IN BLIND SCHOOLS

    Directory of Open Access Journals (Sweden)

    Sandhya

    2015-09-01

    BACKGROUND: It is estimated that there are 1.4 million irreversibly blind children in the world, of which 1 million are in Asia alone. India has more blind children than any other country. Nearly 70% of childhood blindness is avoidable. There is a paucity of data on the causes of childhood blindness. This study focuses on the posterior segment causes of blindness among children attending blind schools in 3 adjacent districts of Andhra Pradesh. MATERIAL & METHODS: This is a cross-sectional study conducted among 204 blind children aged 6-16 years. Detailed eye examination was done by the same investigator to avoid bias. Posterior segment examination was done using a direct and/or indirect ophthalmoscope after dilating the pupil wherever necessary. The standard WHO/PBL blindness and low vision examination protocol was used to categorize the causes of blindness. A major anatomical site and underlying cause was selected for each child. The study was carried out from July 2014 to June 2015. The results were analyzed using MS Excel and Epi Info 7 statistical software. RESULTS: A majority of the children were aged 13-16 years (45.1%) and male (63.7%). Family history of blindness was noted in 26.0% and consanguinity was reported in 29.9% of cases. A majority fulfilled the WHO grade of blindness (73.0%) and in most cases the onset of blindness was at birth (83.7%). The etiology of blindness was unknown in the majority of cases (57.4%), while hereditary causes constituted 25.4% of cases. Posterior segment causes were responsible in 33.3% of cases, with the retina being the most commonly involved anatomical site (19.1%), followed by the optic nerve (14.2%). CONCLUSIONS: There is a need for mandatory ophthalmic evaluation, refraction and assessment of low vision prior to admission into blind schools, with periodic evaluation every 2-3 years.

  5. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
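
    Specter's core step, expressing the mixture spectrum as a nonnegative combination of library spectra, can be sketched with a standard NNLS solver. The toy library below is invented for illustration and is far smaller than a real spectral library:

```python
import numpy as np
from scipy.optimize import nnls

# Toy "spectral library": each column is a reference fragment-ion spectrum
# (invented numbers; a real library would come from DDA data or prediction).
library = np.array([
    [0.9, 0.0, 0.1],
    [0.1, 0.8, 0.0],
    [0.0, 0.2, 0.5],
    [0.0, 0.0, 0.4],
])

# A DIA mixture spectrum: 2 parts peptide 0 plus 1 part peptide 2.
mixture = library @ np.array([2.0, 0.0, 1.0])

# Nonnegative least squares assigns each library entry its contribution;
# absent peptides get a coefficient of zero rather than a spurious match.
coeffs, residual = nnls(library, mixture)
```

    Because the deconvolution is a direct linear fit against the library, it sidesteps the fragment-correlation heuristics that struggle with near-identical peptide sequences.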

  6. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro, and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not treated as an algorithm in its own right, but as the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
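Numerically, convolution and its inversion can be illustrated with polynomial-division-style deconvolution, which is the same arithmetic a spreadsheet implementation performs; the release and weighting vectors below are hypothetical:

```python
import numpy as np
from scipy.signal import convolve, deconvolve

# Hypothetical in vitro input (fractional release per time step) and
# a unit-impulse weighting (response) function.
release = np.array([0.5, 0.3, 0.15, 0.05])
weighting = np.array([1.0, 0.6, 0.2])

# Convolution predicts the in vivo response from the input.
response = convolve(release, weighting)

# Deconvolution inverts the operation (polynomial long division),
# recovering the input exactly in this noise-free case.
recovered, remainder = deconvolve(response, weighting)
print(np.allclose(recovered, release))  # → True
```

With noisy data the division step amplifies errors, which is why practical in-vitro-in-vivo work smooths or parameterizes the curves first.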

  7. Multi-channel blind image deconvolution based on innovations

    OpenAIRE

    Kopriva, Ivica; Seršić, Damir

    2011-01-01

    The linear mixture model (LMM) has recently been used for multi-channel representation of a blurred image. This enables the use of multivariate data analysis methods such as independent component analysis (ICA) to solve blind image deconvolution as an instantaneous blind source separation (BSS) problem, requiring no a priori knowledge about the size and origin of the blurring kernel. However, there remains a serious weakness of this approach: statistical dependence between hidden variables in the LMM. The...

  8. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  9. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
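The Poisson maximum-likelihood principle underlying such restoration is classically realized by the Richardson-Lucy iteration. A 1-D, single-frame sketch (not the authors' multi-frame algorithm; the PSF and signal are invented):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Richardson-Lucy iteration: maximum-likelihood deconvolution
    under a Poisson noise model, assuming a known, normalized PSF."""
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        # Multiplicative update preserves non-negativity of the estimate.
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two point sources blurred by a small symmetric PSF.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(32)
truth[10], truth[20] = 5.0, 3.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=300)
print(restored.argmax())  # → 10
```

Each iteration reblurs the current estimate, compares it to the data, and back-projects the ratio through the mirrored PSF; the fixed point maximizes the Poisson likelihood.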

  10. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high-resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.

  11. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by the application of the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  12. LLL/DOR seismic conservatism of operating plants project. Interim report on Task II.1.3: soil-structure interaction. Deconvolution of the June 7, 1975, Ferndale Earthquake at the Humboldt Bay Power Plant

    International Nuclear Information System (INIS)

    Maslenikov, O.R.; Smith, P.D.

    1978-01-01

    The Ferndale Earthquake of June 7, 1975, provided a unique opportunity to study the accuracy of the seismic soil-structure interaction methods used in the nuclear industry because, other than this event, there have been no significant earthquakes for which moderate motions of nuclear plants have been recorded. Future studies are planned which will evaluate the soil-structure interaction methodology further, using increasingly complex methods as required. The first step in this task was to perform deconvolution and soil-structure interaction analyses for the effects of the Ferndale earthquake at the Humboldt Bay Power Plant site. The deconvolution analyses of the bedrock motions performed are compared, along with additional studies on analytical sensitivity.

  13. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium, using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps are analyzed for a simple layered medium using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross-correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
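The cross-correlation route to imaging mentioned above rests on retrieving differential travel times between receivers. A toy 1-D sketch (the delays and signal are invented; real redatuming involves full wavefields, not a single shift):

```python
import numpy as np

# Two receivers record the same random source wavefield, delayed by
# 5 and 12 samples respectively (purely illustrative numbers).
rng = np.random.default_rng(0)
source = rng.normal(size=2048)
rec_a = np.roll(source, 5)
rec_b = np.roll(source, 12)

# Cross-correlating the recordings retrieves the inter-receiver travel
# time (7 samples) without knowing the source or the medium in between,
# the essence of virtual-source (Green's function) retrieval.
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = xcorr.argmax() - (len(rec_a) - 1)
print(lag)  # → 7
```

Multidimensional deconvolution replaces this correlation by an inversion, which is what removes the source-signature and ghost imprints in the imaging step.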

  14. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    Science.gov (United States)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoring a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point by point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although the results are preliminary and there is room to optimize the prototype, the idea shows promise for overcoming the limitations of image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.
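The benefit of multiple engineered PSFs can be sketched with a joint Wiener-type inversion in the frequency domain, assuming two known 1-D kernels (invented stand-ins for the conical-diffraction PSFs): frequencies suppressed by one kernel are recovered from the other.

```python
import numpy as np

# Two observations of the same 1-D scene, blurred by different kernels.
rng = np.random.default_rng(1)
scene = rng.normal(size=256)
psf1 = np.zeros(256); psf1[:3] = [0.25, 0.5, 0.25]        # nulls Nyquist
psf2 = np.zeros(256); psf2[:5] = [0.1, 0.2, 0.4, 0.2, 0.1]  # no shared null

H1, H2 = np.fft.fft(psf1), np.fft.fft(psf2)
Y1 = np.fft.fft(scene) * H1   # circular blur, observation 1
Y2 = np.fft.fft(scene) * H2   # circular blur, observation 2

# Multi-kernel Wiener-type combination: each frequency is estimated from
# whichever kernel transmits it; eps regularizes near-zero denominators.
eps = 1e-6
X = (np.conj(H1) * Y1 + np.conj(H2) * Y2) / (abs(H1)**2 + abs(H2)**2 + eps)
restored = np.fft.ifft(X).real
print(abs(restored - scene).max() < 1e-3)  # → True
```

Neither kernel alone would allow this: psf1 completely suppresses the Nyquist component, which is here recovered from psf2.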

  15. Postural control in blind subjects

    Directory of Open Access Journals (Sweden)

    Antonio Vinicius Soares

    2011-12-01

    Full Text Available Objective: To analyze postural control in acquired and congenitally blind adults. Methods: A total of 40 visually impaired adults participated in the research, divided into 2 groups, 20 with acquired blindness and 20 with congenital blindness - 21 males and 19 females, mean age 35.8 ± 10.8. The Brazilian version of Berg Balance Scale and the motor domain of functional independence measure were utilized. Results: On Berg Balance Scale the mean for acquired blindness was 54.0 ± 2.4 and 54.4 ± 2.5 for congenitally blind subjects; on functional independence measure the mean for acquired blind group was 87.1 ± 4.8 and 87.3 ± 2.3 for congenitally blind group. Conclusion: Based upon the scale used the results suggest the ability to control posture can be developed by compensatory mechanisms and it is not affected by visual loss in congenitally and acquired blindness.

  16. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
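In the noiseless case, the titration deconvolution described above reduces to applying the SVD-based pseudoinverse to the measured mixture. A minimal sketch with an invented three-indicator absorbance matrix:

```python
import numpy as np

# Columns: hypothetical absorbance spectra of three pH indicators,
# sampled at five wavelengths.
A = np.array([
    [1.0, 0.2, 0.0],
    [0.6, 0.9, 0.1],
    [0.2, 0.7, 0.5],
    [0.0, 0.3, 1.0],
    [0.0, 0.1, 0.8],
])

# Measured mixture spectrum for unknown concentrations x: b = A x.
x_true = np.array([0.4, 1.2, 0.7])
b = A @ x_true

# SVD gives the pseudoinverse solution of the least-squares problem;
# this line is equivalent to np.linalg.pinv(A) @ b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_hat = Vt.T @ ((U.T @ b) / s)
print(np.round(x_hat, 3))  # → [0.4 1.2 0.7]
```

With noisy spectra, small singular values amplify noise, which is where truncating the SVD (the topic of the paper's noisy examples) comes in.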

  17. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    Science.gov (United States)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of broadly varied MW structurally similar to native lignin. For the first time, we report the effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L⁻¹ formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.

  18. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding

    DEFF Research Database (Denmark)

    Hróbjartsson, A; Forfang, E; Haahr, M T

    2007-01-01

    Blinding can reduce bias in randomized clinical trials, but blinding procedures may be unsuccessful. Our aim was to assess how often randomized clinical trials test the success of blinding, the methods involved, and how often blinding is reported as being successful.

  19. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

    In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal versus dose curve has been compared with the one measured in a ⁶⁰Co beam. Different experiments have been performed in order to observe the thermal stress and radiation damage effects on the detector sensitivity. A LET dependence of the TL response has been observed. In order to get a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are extensively reported, and the possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  20. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    Full Text Available We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample, from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  1. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in "de-blurring" 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from ⁸⁵Sr. The technique, when applied to a series of well annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  2. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to smooth the data beforehand because of the sensitivity of deconvolution algorithms to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value, for two filters (linear and non-linear). To test the effectiveness of these optimal smoothing values, some parameters of the RRF calculated with this optimal smoothing were considered. The comparison of these parameters with the theoretical ones revealed better results for the linear filter than for the non-linear one. The study was carried out by simulating the input and output curves that would be obtained when using hippuran and DTPA as tracers.
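The deconvolution step itself can be sketched as a lower-triangular Toeplitz solve; the exponential curves below are invented, and in this noise-free toy no smoothing is needed:

```python
import numpy as np

# Hypothetical noise-free renogram model: the renal curve is the plasma
# input convolved with the renal retention function (RRF).
n = 40
t = np.arange(n)
rrf_true = np.exp(-t / 8.0)
plasma = np.exp(-t / 5.0)
renal = np.convolve(plasma, rrf_true)[:n]

# Deconvolution as a lower-triangular Toeplitz system renal = C @ rrf.
# With noisy data this inversion amplifies noise, hence the interest in
# optimal pre-smoothing of the measured curves.
C = np.array([[plasma[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
rrf_est = np.linalg.solve(C, renal)
print(np.allclose(rrf_est, rrf_true))  # → True
```

The matrix `C` simply encodes discrete convolution with the input curve, so solving the triangular system is the exact inverse operation.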

  3. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

    The thermogravimetric pyrolysis kinetics of bamboo waste (BW) have been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, BW pyrolytic profiles could be separated into three reactions, corresponding to pseudo-hemicellulose (P-HC), pseudo-cellulose (P-CL), and pseudo-lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energies of P-HC, P-CL, and P-LG were found to be 175.6 kJ/mol, 199.7 kJ/mol, and 158.4 kJ/mol, respectively. Energy compensation effects (ln k0,z vs. Ez) of the pseudo components were highly linear, from which pre-exponential factors (k0) were determined as 6.22×10^11 s^-1 (P-HC), 4.50×10^14 s^-1 (P-CL) and 1.3×10^10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanisms of P-HC, P-CL, and P-LG were of reaction-order type: f(α)=(1-α)^2, f(α)=1-α and f(α)=(1-α)^n (n=6-8), respectively. The mechanisms of P-HC and P-CL could be further reconstructed as n-th order Avrami-Erofeyev models, f(α)=0.62(1-α)[-ln(1-α)]^(-0.61) (n=0.62) and f(α)=1.08(1-α)[-ln(1-α)]^0.074 (n=1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
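The deconvolution amounts to fitting a sum of asymmetric double-sigmoid peaks to the thermogravimetric rate curve. A two-peak sketch using `scipy.optimize.curve_fit` (all parameter values and the fixed widths are invented for illustration; this is not the paper's three-component BW fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def asym2sig(x, a, xc, w1, w2, w3):
    """Asymmetric double sigmoid peak (Origin's Asym2sig form)."""
    rise = 1.0 / (1.0 + np.exp(-(x - xc + w1 / 2) / w2))
    fall = 1.0 / (1.0 + np.exp(-(x - xc - w1 / 2) / w3))
    return a * rise * (1.0 - fall)

def two_peaks(x, a1, c1, a2, c2):
    # Widths are fixed (hypothetical) to keep the toy fit well-conditioned;
    # a full deconvolution would fit them too.
    return asym2sig(x, a1, c1, 30, 8, 12) + asym2sig(x, a2, c2, 40, 10, 15)

# Synthetic overlapped "DTG" curve built from two known peaks.
x = np.linspace(200, 500, 600)
y = two_peaks(x, 1.0, 300.0, 0.8, 380.0)

# Least-squares fit separates the overlapping components again.
popt, _ = curve_fit(two_peaks, x, y, p0=[0.9, 290, 0.7, 390])
print(np.round(popt, 2))
```

Once separated, each pseudo-component curve can be fed into an isoconversional method such as Friedman's to extract its activation energy.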

  4. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data, so that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  5. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals must compensate for the lack of visual information through other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and of controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation has occurred in the former.

  6. Future trends in global blindness

    Directory of Open Access Journals (Sweden)

    Serge Resnikoff

    2012-01-01

    Full Text Available The objective of this review is to discuss the available data on the prevalence and causes of global blindness, and some of the associated trends and limitations seen. A literature search was conducted using the terms "global AND blindness" and "global AND vision AND impairment", resulting in seven appropriate articles for this review. Since 1990 the estimate of global prevalence of blindness has gradually decreased when considering the best corrected visual acuity definition: 0.71% in 1990, 0.59% in 2002, and 0.55% in 2010, corresponding to a 0.73% reduction per year over the 2002-2010 period. Significant limitations were found in the comparability between the global estimates in prevalence or causes of blindness or visual impairment. These limitations arise from various factors such as uncertainties about the true cause of the impairment, the use of different definitions and methods, and the absence of data from a number of geographical areas, leading to various extrapolation methods, which in turn seriously limit comparability. Seminal to this discussion on limitations in the comparability of studies and data, is that blindness has historically been defined using best corrected visual acuity.

  7. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers with a complete and self-contained treatment of dependent source separation, including the latest developments in this field. The book gives an overview of blind source separation, in which three promising blind separation techniques that can tackle mutually correlated sources are presented. The book then focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  8. Environment and Blindness Situation in Iran

    Directory of Open Access Journals (Sweden)

    Soraya Askari

    2010-04-01

    Full Text Available Objectives: The purpose of this study is to describe the experiences of adults with acquired blindness while performing the daily activities of normal life, and to investigate the role of environmental factors in this process. Methods: A qualitative phenomenological method was designed for this study. A sample of 22 adults with acquired blindness, who had been blind for more than 5 years, were purposefully selected, and semi-structured in-depth interviews were conducted with them. The interviews were transcribed verbatim, coded and analyzed using van Manen's method. Results: The five clustered themes that emerged from the interviews included: (1) Products and technology, discussing the benefits and drawbacks of using advanced technology to promote independence; (2) Physical environment: "The streets are like an obstacle course"; (3) Support and relationships, referring to the assistance that blind people receive from family, friends, and society; (4) Attitudes, including family and social attitudes toward blind people; (5) Services and policies: social security, supportive acts, economic factors, educational problems and the provision of services. Discussion: The findings identify how the daily living activities of blind people are affected by environmental factors and what those factors are. The results will enable occupational therapists and other health care professionals who are involved with blind people to become more competent during assessment, counseling, teaching, giving support, or other interventions as needed to assist blind people. Recommendations for further research include more studies of this population to identify other challenges over time; this would facilitate long-term goals in care. Studies that include more diversity in demographic characteristics would provide greater generalization. Some characteristics, such as the adolescent age group, married and single status, ethnicity, and socioeconomic status, are particularly important to target.

  9. A synchrotron X-ray diffraction deconvolution method for the measurement of residual stress in thermal barrier coatings as a function of depth.

    Science.gov (United States)

    Li, C; Jacques, S D M; Chen, Y; Daisenberger, D; Xiao, P; Markocsan, N; Nylen, P; Cernik, R J

    2016-12-01

    The average residual stress distribution as a function of depth in an air plasma-sprayed yttria stabilized zirconia top coat used in thermal barrier coating (TBC) systems was measured using synchrotron radiation X-ray diffraction in reflection geometry on station I15 at Diamond Light Source, UK, employing a series of incidence angles. The stress values were calculated from data deconvoluted from diffraction patterns collected at increasing depths. The stress was found to be compressive through the thickness of the TBC and a fluctuation in the trend of the stress profile was indicated in some samples. Typically this fluctuation was observed to increase from the surface to the middle of the coating, decrease a little and then increase again towards the interface. The stress at the interface region was observed to be around 300 MPa, which agrees well with the reported values. The trend of the observed residual stress was found to be related to the crack distribution in the samples, in particular a large crack propagating from the middle of the coating. The method shows promise for the development of a nondestructive test for as-manufactured samples.

  10. The sensory construction of dreams and nightmare frequency in congenitally blind and late blind individuals

    DEFF Research Database (Denmark)

    Meaidi, Amani; Jennum, Poul; Ptito, Maurice

    2014-01-01

    OBJECTIVES: We aimed to assess dream content in groups of congenitally blind (CB), late blind (LB), and age- and sex-matched sighted control (SC) participants. METHODS: We conducted an observational study of 11 CB, 14 LB, and 25 SC participants and collected dream reports over a 4-week period ... and anxiety levels. RESULTS: All blind participants had fewer visual dream impressions compared to SC participants. In LB participants, duration of blindness was negatively correlated with duration, clarity, and color content of visual dream impressions. CB participants reported more auditory, tactile, gustatory, and olfactory dream components compared to SC participants. In contrast, LB participants only reported more tactile dream impressions. Blind and SC participants did not differ with respect to emotional and thematic dream content. However, CB participants reported more aggressive interactions...

  11. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    International Nuclear Information System (INIS)

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix; Voogt, Pim de

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software tools (MsXelerator and Sieve 2.1) used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually

  12. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    Energy Technology Data Exchange (ETDEWEB)

    Bade, Richard [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Causanilles, Ana; Emke, Erik [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Voogt, Pim de, E-mail: w.p.devoogt@uva.nl [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam (Netherlands)

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection into an LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched against either an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software tools (MsXelerator and Sieve 2.1) were used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually.

  13. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new, simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function, F(t) = (1 + e^(-αβ))/(1 + e^((t-α)β)), was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations ranging from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the fitted Fermi function. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
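    The fitting loop described above (convolve the gastric-emptying input with a two-parameter Fermi function, then minimize the sum of squared residuals against the observed small-intestinal curve) can be sketched as follows. The synthetic input curve, sampling grid and grid-search ranges are illustrative assumptions, not values from the study:

```python
import numpy as np

def fermi(t, alpha, beta):
    # Fermi impulse response: F(t) = (1 + exp(-alpha*beta)) / (1 + exp((t - alpha)*beta))
    return (1.0 + np.exp(-alpha * beta)) / (1.0 + np.exp((t - alpha) * beta))

def fit_fermi(t, input_curve, observed, alphas, betas):
    """Grid-search alpha, beta so that (input * fermi) best fits the observed curve."""
    dt = t[1] - t[0]
    best = (None, None, np.inf)
    for a in alphas:
        for b in betas:
            model = np.convolve(input_curve, fermi(t, a, b))[: len(t)] * dt
            sse = np.sum((model - observed) ** 2)
            if sse < best[2]:
                best = (a, b, sse)
    return best

# Synthetic demo: exponential gastric emptying feeding a known Fermi response
t = np.arange(0, 10, 0.05)            # hours
inp = 2.0 * np.exp(-2.0 * t)          # small-intestinal input rate
true_a, true_b = 2.0, 3.0
obs = np.convolve(inp, fermi(t, true_a, true_b))[: len(t)] * (t[1] - t[0])

a_hat, b_hat, sse = fit_fermi(t, inp, obs,
                              np.arange(1.0, 3.05, 0.1),
                              np.arange(1.0, 5.05, 0.25))
# on noise-free data the best grid point lands near (2.0, 3.0)
```

    In practice the observed curve is noisy scintigraphic data, so a coarse grid followed by a local refinement (or a standard nonlinear least-squares routine) would replace the plain grid search.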

  14. Multiple geophysical methods examining neotectonic blind structures in the Maradona valley, Central Precordillera (Argentina)

    Science.gov (United States)

    Lara, Gabriela; Klinger, Federico Lince; Perucca, Laura; Rojo, Guillermo; Vargas, Nicolás; Leiva, Flavia

    2017-08-01

    A high-resolution near-surface geophysical study was carried out in an area of the retroarc region of the Andes, located in the southwest of San Juan Province (31°45′ S, 68°50′ W), Central Precordillera of Argentina. The main objectives of this study were to confirm the presence of blind neotectonic structures and to characterize them by observing variations in magnetic susceptibility, density and P-wave velocities. Geological evidence demonstrates the existence of neotectonic fault scarps affecting Quaternary alluvial deposits in the eastern piedmont of the de Las Osamentas range, in addition to direct observation of the kinematics of this feature in several natural exposures. The Maradona valley is characterized by the imbricated, east-vergent Maradona Fault System, which uplifts Neogene sedimentary rocks (Albarracín Formation) over Quaternary (Late Pleistocene-Holocene) alluvial deposits. The combined application of different geophysical methods has allowed the characterization of a blind fault geometry also identified in a natural exposure. The magnetic data, added to the gravimetric model and integrated with a seismic profile, clearly show the existence of an anomalous zone, interpreted as uplifted blocks of Miocene sedimentary rocks of the Albarracín Formation displaced over Quaternary deposits. The application of different geophysical methods, together with geological studies, significantly improves knowledge of an area affected by Quaternary tectonic activity. Finally, this multidisciplinary study of active blind structures is highly relevant for future seismic hazard analysis in areas located very close to population centers.

  15. Blinding in randomized control trials: the enigma unraveled.

    Directory of Open Access Journals (Sweden)

    Vartika Saxena

    2016-03-01

    The search for new treatments and the testing of new ideas begin in the laboratory and are then established in clinical research settings. Studies addressing the same therapeutic problem may produce conflicting results; hence the randomised clinical trial is regarded as the most valid method for assessing the benefits and harms of healthcare interventions. The next challenge faced by the medical community is the validity of such trials, as these tend to deviate from the truth because of various biases. To avoid this, it has been suggested that the validity or quality of primary trials should be assessed under blind conditions. Thus blinding is a crucial method for reducing bias in randomized clinical trials. Blinding can be defined as withholding information about the assigned interventions from people involved in the trial who may potentially be prejudiced by this knowledge. In this article we make an effort to define blinding, explain its chronology and hierarchy, and discuss methods of blinding, its assessment, its feasibility, un-blinding and, finally, the latest guidelines.

  16. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 × 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
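    The abstract does not spell out the Gold iteration itself; a common formulation (used, e.g., in spectroscopy toolkits) is the multiplicative update x ← x · (Aᵀy)/(AᵀAx), where A is the convolution matrix of the system response, which keeps the estimate nonnegative. A minimal sketch on a synthetic waveform with two overlapping echoes; all waveform parameters are invented for illustration:

```python
import numpy as np

def gold_deconvolve(y, psf, n_iter=2000):
    """Gold's ratio iteration: x <- x * (A^T y) / (A^T A x).

    A is the (Toeplitz) convolution matrix built from the system response psf,
    centered to match np.convolve(..., mode='same'). The multiplicative update
    keeps the estimate nonnegative throughout."""
    n = len(y)
    A = np.zeros((n, n))
    c = len(psf) // 2
    for i in range(n):
        for j in range(n):
            k = i - j + c
            if 0 <= k < len(psf):
                A[i, j] = psf[k]
    ATy = A.T @ y
    ATA = A.T @ A
    x = np.full(n, y.mean() if y.mean() > 0 else 1.0)
    for _ in range(n_iter):
        x = x * ATy / np.maximum(ATA @ x, 1e-12)
    return x

# Demo: two closely spaced returns blurred by a Gaussian system response
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[24], truth[30] = 1.0, 0.6
y = np.convolve(truth, psf, mode="same")
x = gold_deconvolve(y, psf)
# the deconvolved waveform should re-sharpen the hidden echoes near bins 24 and 30
```

    The explicit matrix build is for clarity only; for long waveforms the products AᵀA·x and Aᵀy would be computed with FFT-based convolutions instead.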

  17. Blind source separation analysis of PET dynamic data: a simple method with exciting MR-PET applications

    Energy Technology Data Exchange (ETDEWEB)

    Oros-Peusquens, Ana-Maria; Silva, Nuno da [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Weiss, Carolin [Department of Neurosurgery, University Hospital Cologne, 50924 Cologne (Germany); Stoffels, Gabrielle; Herzog, Hans; Langen, Karl J [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Shah, N Jon [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Jülich-Aachen Research Alliance (JARA) - Section JARA-Brain RWTH Aachen University, 52074 Aachen (Germany)

    2014-07-29

    Denoising of dynamic PET data improves parameter imaging by PET and is gaining momentum. This contribution describes an analysis of dynamic PET data by blind source separation methods and comparison of the results with MR-based brain properties.
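    The record does not say which blind source separation algorithm was used. As one hedged illustration, nonnegative matrix factorization (NMF) is a natural fit for dynamic PET, since both time-activity curves and mixing weights are nonnegative; the toy data below (two invented source kinetics mixed across synthetic voxels) stands in for a real dynamic series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dynamic data: each voxel's time-activity curve (TAC) is a
# nonnegative mix of two source kinetics (blood-pool-like and tissue-like).
t = np.linspace(0.1, 60, 24)                      # frame mid-times (min)
sources = np.vstack([np.exp(-0.10 * t),           # fast washout
                     1.0 - np.exp(-0.05 * t)])    # slow uptake
weights = rng.uniform(0, 1, size=(500, 2))        # 500 voxels, 2 sources
X = weights @ sources + 0.01 * rng.random((500, 24))

def nmf(X, r, n_iter=500, eps=1e-9):
    """Plain multiplicative-update NMF: X ~ W @ H with W, H >= 0."""
    n, m = X.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
# the rows of H approximate the underlying source kinetics (up to scale and order)
```

    Separated source curves (rows of H) can then be compared voxel-wise against MR-derived tissue maps, which is the kind of MR-PET comparison the abstract alludes to.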

  18. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Brett [Institute of Biomedical Studies, Baylor University, Waco, TX 76798 (United States); Neumann, Elizabeth K. [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States); Stow, Sarah M.; May, Jody C.; McLean, John A. [Department of Chemistry, Vanderbilt University, Nashville, TN 37235 (United States); Vanderbilt Institute of Chemical Biology, Nashville, TN 37235 (United States); Vanderbilt Institute for Integrative Biosystems Research and Education, Nashville, TN 37235 (United States); Center for Innovative Technology, Nashville, TN 37235 (United States); Solouki, Touradj, E-mail: Touradj_Solouki@baylor.edu [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States)

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  19. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    International Nuclear Information System (INIS)

    Harper, Brett; Neumann, Elizabeth K.; Stow, Sarah M.; May, Jody C.; McLean, John A.; Solouki, Touradj

    2016-01-01

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  20. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    International Nuclear Information System (INIS)

    Jimenez-Ruiz, A.; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R.

    2017-01-01

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs), causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternative fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained by both methods are in good agreement with each other.

  1. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez-Ruiz, A., E-mail: ailjimrui@alum.us.es; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R., E-mail: pradogotor@us.es [University of Seville, The Department of Physical Chemistry (Spain)

    2017-01-15

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs), causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternative fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained by both methods are in good agreement with each other.

  2. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Blind Signal Classification via Sparse Coding. Youngjune Gwon, MIT Lincoln Laboratory (gyj@ll.mit.edu); Siamak Dastangoo, MIT Lincoln Laboratory. ... achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF ... classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6...

  3. Prevalence and causes of low vision and blindness in the blind population supported by the Yazd Welfare Organization

    Directory of Open Access Journals (Sweden)

    F Ezoddini - Ardakani

    2006-01-01

    Introduction: In 1995, the World Health Organization (WHO) estimated that there were 37.1 million blind people worldwide. It has subsequently been reported that 110 million people have severely impaired vision and hence are at great risk of becoming blind. Watkins predicted an annual increase of about two million blind worldwide. This study was designed to investigate the causes of blindness and low vision in the blind population supported by the welfare organization of Yazd, Iran. Methods: This descriptive cross-sectional clinical study was done from January to September, 2003. In total, 109 blind patients supported by the welfare organization were included in this study. All data were collected by standard methods using a questionnaire, interview and specific examination. The data included demographic characteristics, clinical state, ophthalmic examination, family history and the available prenatal information. The data were analyzed by SPSS software and the chi-square test. Results: Of the total patients, 73 cases were male (67%) and 36 were female (33%). The median age was 24.6 years (range one month to 60 years). More than half of the cases (53.2%) could be diagnosed in children less than one year of age. In total, 79 patients (88.1%) were legally blind, of which 23 cases (29.1%) had no light perception (NLP). The most common causes of blindness were retinitis pigmentosa (32.1%) followed by ocular dysgenesis (16.5%). Conclusion: Our data showed that more than half of the blindness cases occur during the first year of life. The most common cause of blindness was retinitis pigmentosa followed by ocular dysgenesis, cataract and glaucoma, respectively.

  4. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    Science.gov (United States)

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
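    Conceptually, the on-line deconvolution step reduces to solving an overdetermined linear system: the four measured dual-wavelength difference signals are expressed as linear combinations of the three component redox changes, with coefficients taken from the Differential Model Plots. A least-squares sketch with a purely hypothetical model matrix (the real coefficients are measured empirically for each setup):

```python
import numpy as np

# Hypothetical 4x3 model matrix: contribution of Fd, P700 and PC redox changes
# to the four dual-wavelength difference signals (785-840, 810-870, 870-970
# and 795-970 nm). These numbers are invented for illustration; the real ones
# come from the empirically derived 'Differential Model Plots'.
M = np.array([[0.9, 0.3, 0.2],
              [0.4, 1.0, 0.3],
              [0.1, 0.5, 1.0],
              [0.3, 0.8, 0.6]])

def unmix(signals, M):
    """Least-squares deconvolution of the three component redox changes
    from the four measured difference signals (overdetermined 4x3 system)."""
    c, *_ = np.linalg.lstsq(M, signals, rcond=None)
    return c  # [Fd, P700, PC]

true_c = np.array([0.7, 0.2, 0.5])   # hypothetical fractional redox changes
signals = M @ true_c                 # what the four channels would record
c_hat = unmix(signals, M)            # recovers true_c in the noise-free case
```

    Because the system has one more equation than unknowns, the least-squares solution also averages down uncorrelated measurement noise slightly, which is one benefit of recording four difference signals for three components.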

  5. A Dynamic Tap Allocation for Concurrent CMA-DD Equalizers

    Directory of Open Access Journals (Sweden)

    Diego von B. M. Trindade

    2010-01-01

    This paper proposes a dynamic tap allocation for the concurrent CMA-DD equalizer as a low-complexity solution to the blind channel deconvolution problem. The number of taps is a crucial factor affecting the performance and the complexity of most adaptive equalizers. Generally, an equalizer requires a large number of taps in order to cope with long delays in the channel multipath profile. Simulations show that the proposed new blind equalizer is able to solve the blind channel deconvolution problem with a specified and reduced number of active taps. As a result, it minimizes both the hardware complexity and the output excess mean square error due to inactive taps during and after equalizer convergence.
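    A minimal sketch of the idea, assuming the standard constant-modulus (CMA) tap update w ← w − μ(|z|² − R₂)z·u* and a crude one-shot pruning rule standing in for the paper's allocation scheme; the channel, modulation and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def cma_equalize(x, n_taps=11, n_active=5, mu=1e-3, R2=1.0):
    """Constant-modulus blind equalizer with a crude dynamic tap allocation:
    after a burn-in period, only the n_active largest-magnitude taps stay
    active and the rest are forced to zero (fewer multipliers, less excess
    MSE from taps that only carry gradient noise)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                     # center-spike initialization
    mask = np.ones(n_taps, dtype=bool)
    out = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]            # regressor, most recent first
        z = w @ u
        out[n] = z
        e = (np.abs(z) ** 2 - R2) * z        # CMA error term
        w -= mu * e * np.conj(u)
        if n == len(x) // 2:                 # one-shot tap pruning
            keep = np.argsort(np.abs(w))[-n_active:]
            mask[:] = False
            mask[keep] = True
        w[~mask] = 0.0                       # inactive taps stay zeroed
    return out, w

# QPSK through a mild two-path channel
sym = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
chan = np.array([1.0, 0.25 + 0.1j])
rx = np.convolve(sym, chan)[: len(sym)]
out, w = cma_equalize(rx)
tail = np.abs(out[-500:])   # after convergence, output modulus hovers near R2
```

    The actual concurrent CMA-DD scheme runs a decision-directed branch alongside CMA and reallocates taps adaptively rather than with a single pruning pass; this sketch only illustrates why freezing inactive taps removes their contribution to the excess MSE.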

  6. Uniocular blindness in Delta State Teaching Hospital, Oghara, Nigeria

    African Journals Online (AJOL)

    Background: Uniocular blindness causes loss of binocular single vision. People with uniocular blindness are potentially at risk of developing binocular blindness. Aim: To determine the prevalence rate, causes and risk factors for uniocular blindness in a teaching hospital in southern Nigeria over a one-year period. Methods: ...

  7. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Födisch, P., E-mail: p.foedisch@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Wohsmann, J. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Lange, B. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Schönherr, J. [Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Enghardt, W. [OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, PF 41, 01307 Dresden (Germany); Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology, Bautzner Landstr. 400, 01328 Dresden (Germany); German Cancer Consortium (DKTK) and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Kaever, P. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany)

    2016-09-11

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. Its output signal exhibits a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem or from an increased rate of pulse pile-up. Moreover, spectroscopy applications require a correction of the pulse height, while a shortened pulse width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finally, the exponential decay is shaped into a step-like output signal that is exploited by forward-looking pulse processing.
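    One standard way to realize such a deconvolution digitally is pole-zero cancellation: the amplifier's single decay pole d = exp(−T/τ) is cancelled by an FIR zero, and an integrator pole at z = 1 restores a step-like output. This sketch assumes that simple first-order model rather than the paper's specific filter designs:

```python
import numpy as np

def deconvolve_exp(y, d):
    """Deconvolve a single-pole exponential decay (pole d = exp(-T/tau))
    into a step-like signal via H(z) = (1 - d z^-1) / (1 - z^-1).
    The FIR numerator cancels the amplifier pole; the integrator pole
    at z = 1 turns the recovered impulse back into a step."""
    out = np.zeros_like(y)
    acc = 0.0
    prev = 0.0
    for i, v in enumerate(y):
        acc += v - d * prev     # pole-zero cancellation + accumulation
        prev = v
        out[i] = acc
    return out

# Synthetic CSA pulse: charge q arrives at sample n0, decays with tau = 50 samples
n = np.arange(300)
tau, n0, q = 50.0, 40, 1.0
d = np.exp(-1.0 / tau)
y = np.where(n >= n0, q * d ** (n - n0), 0.0)

step = deconvolve_exp(y, d)
# 'step' is ~0 before n0 and ~q afterwards: the decay has been removed
```

    The recovered step height is directly proportional to the deposited charge, which is why this shaping both fixes the pulse-height (ballistic deficit) issue and shortens the effective pulse for pile-up handling.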

  8. A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy (MSIM) using a joint Richardson-Lucy (jRL-MSIM) deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data, and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy (ISM), is made. Our algorithm is efficient and freely available in a user-friendly software package. (paper)
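    For reference, the classic Richardson-Lucy iteration that underlies joint RL schemes multiplies the current estimate by the back-projected ratio of the data to the model prediction. A minimal 1-D, single-view sketch (the jRL-MSIM version iterates jointly over many focal illumination patterns, which is not reproduced here):

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy iteration for y = psf * x under Poisson noise:
    x <- x * conv(y / conv(x, psf), mirrored psf)."""
    psf_m = psf[::-1]                        # mirrored PSF = adjoint of the blur
    x = np.full_like(y, y.mean())            # flat, positive initial estimate
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(est, eps)
        x *= np.convolve(ratio, psf_m, mode="same")
    return x

# Demo: blur two point sources with a Gaussian PSF, then deconvolve
psf = np.exp(-0.5 * (np.arange(-6, 7) / 1.5) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 4.0, 2.0
y = np.convolve(truth, psf, mode="same")
x = richardson_lucy(y, psf)
# the estimate stays nonnegative and conserves total counts while re-sharpening
```

    The multiplicative form explains two properties mentioned in the abstract: the estimate can never go negative, and, because the update is derived from the Poisson likelihood, the scheme is naturally robust to shot-noise-corrupted data.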

  9. Blind Deconvolution for Jump-Preserving Curve Estimation

    Directory of Open Access Journals (Sweden)

    Xingfang Huang

    2010-01-01

    when recovering the signals. Our procedure is based on three local linear kernel estimates of the regression function, constructed from observations in a left-side, a right-side, and a two-side neighborhood of a given point, respectively. The estimated function at the given point is then defined by one of the three estimates with the smallest weighted residual sum of squares. To better remove the noise and blur, this estimate can also be updated iteratively. Performance of this procedure is investigated by both simulation and real data examples, from which it can be seen that our procedure performs well in various cases.
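    The three-estimate selection rule is easy to prototype: fit a local line on a left-sided, a right-sided and a two-sided window, and keep the fit with the smallest mean residual sum of squares. A sketch with a uniform kernel and invented simulation parameters (the paper uses general kernel weights and an optional iterative refinement, omitted here):

```python
import numpy as np

def jump_preserving_fit(x, y, h=0.1):
    """At each point, fit local linear regressions on a left-sided,
    right-sided and two-sided neighborhood (uniform kernel, bandwidth h)
    and keep the fit with the smallest per-point residual sum of squares.
    Near a jump, the one-sided window that avoids the discontinuity wins,
    so the edge is not smeared."""
    fits = np.empty(len(x))
    for i, xi in enumerate(x):
        best_rss, best_val = np.inf, y[i]
        for lo, hi in ((xi - h, xi), (xi, xi + h), (xi - h, xi + h)):
            sel = (x >= lo) & (x <= hi)
            if sel.sum() < 3:
                continue                      # too few points to fit a line
            coef = np.polyfit(x[sel], y[sel], 1)
            rss = np.mean((y[sel] - np.polyval(coef, x[sel])) ** 2)
            if rss < best_rss:
                best_rss, best_val = rss, np.polyval(coef, xi)
        fits[i] = best_val
    return fits

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 400)
truth = np.where(x < 0.5, 0.0, 1.0)           # unit jump at x = 0.5
y = truth + 0.05 * rng.standard_normal(x.size)
fhat = jump_preserving_fit(x, y)
# the jump at 0.5 stays sharp instead of being smoothed over a width ~h
```

    A plain two-sided smoother would spread the jump over roughly one bandwidth; here the windows straddling the jump incur a large residual sum of squares and lose to the clean one-sided fit.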

  10. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, which results in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach applied to the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time-activity curve in the striata region with an error below 3%, whereas it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability.
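    As a hedged 1-D illustration of the idea (iterative least-squares deconvolution plus a smoothness penalty standing in for the paper's spatiotemporal regularization), a Landweber-type gradient scheme; the PSF, structure and weights are invented and no claim is made that this matches the authors' exact formulation:

```python
import numpy as np

def landweber_reg(y, psf, lam=0.05, alpha=0.5, n_iter=500):
    """Regularized iterative deconvolution: gradient descent on
        ||H x - y||^2 + lam * ||D x||^2,
    with H the PSF blur and D the first-difference (smoothness) operator.
    A 1-D stand-in for a weighted least squares spatiotemporal scheme."""
    psf_m = psf[::-1]
    blur = lambda v: np.convolve(v, psf, mode="same")
    blur_t = lambda v: np.convolve(v, psf_m, mode="same")   # adjoint of blur
    x = y.copy()
    for _ in range(n_iter):
        grad = blur_t(blur(x) - y)
        grad[:-1] += lam * (x[:-1] - x[1:])   # D^T D x, forward part
        grad[1:] += lam * (x[1:] - x[:-1])    # D^T D x, backward part
        x -= alpha * grad
    return x

psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(80)
truth[30:50] = 1.0                            # a "hot" structure
y = np.convolve(truth, psf, mode="same")      # partial-volume-blurred data
x = landweber_reg(y, psf)
# the recovered structure has sharper edges than the blurred input
```

    The regularization weight lam trades noise suppression against resolution recovery; in the dynamic (4-D) case an analogous penalty is applied along the time axis as well.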

  11. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    Science.gov (United States)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. 
Explicit theoretical
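The lifting step described in Chapter 2 can be illustrated with a small numerical sketch (an illustration of the identity SparseLift exploits, not the algorithm itself; dimensions and data are arbitrary): each bilinear measurement y_i = d_i * <a_i, x> is a linear functional of the rank-one lifted matrix Z = d x^T.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n = 8, 5
A = rng.standard_normal((L, n))
d = rng.standard_normal(L)       # unknown diagonal of D (calibration error)
x = rng.standard_normal(n)       # unknown signal

y = d * (A @ x)                  # bilinear measurements y = D A x

# Lifting: with Z = d x^T, each measurement is LINEAR in Z:
#   y_i = d_i * <a_i, x> = <Z, e_i a_i^T>
Z = np.outer(d, x)
y_lifted = np.array([Z[i] @ A[i] for i in range(L)])
```

Recovering the rank-one (and row-sparse) Z from such linear measurements is then a convex problem, which is the essence of the lifting approach.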

  12. Interferometric inversion for passive imaging and navigation

    Science.gov (United States)

    2017-05-01

    7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Massachusetts Institute of Technology, 77 Massachusetts Avenue, Room 2-247, Cambridge, MA 01239...Final performance report for AFOSR program manager Arje Nachman. PI: Laurent Demanet, Department of Mathematics, Massachusetts Institute of Technology...performance of those methods can be mathematically understood and certified. Blind deconvolution is a major open question in signal processing, with

  13. The deconvolution of sputter-etching surface concentration measurements to determine impurity depth profiles

    International Nuclear Information System (INIS)

    Carter, G.; Katardjiev, I.V.; Nobes, M.J.

    1989-01-01

    The quasi-linear partial differential continuity equations that describe the evolution of the depth profiles and surface concentrations of marker atoms in kinematically equivalent systems undergoing sputtering, ion collection and atomic mixing are solved using the method of characteristics. It is shown how atomic mixing probabilities can be deduced from measurements of ion collection depth profiles with increasing ion fluence, and how this information can be used to predict surface concentration evolution. Even with this information, however, it is shown that it is not possible to deconvolute directly the surface concentration measurements to provide initial depth profiles, except when only ion collection and sputtering from the surface layer alone occur. It is demonstrated further that optimal recovery of initial concentration depth profiles could be ensured if the concentration-measuring analytical probe preferentially sampled depths near and at the maximum depth of bombardment-induced perturbations. (author)
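The sputter-only limiting case mentioned at the end admits a one-line characteristic solution; the sketch below (illustrative erosion speed and profile, not the authors' model) shows why the surface concentration then replays the initial depth profile directly, making the deconvolution trivial.

```python
import numpy as np

v = 0.5                                    # assumed erosion speed (nm/s)
c0 = lambda z: np.exp(-((z - 10.0) ** 2))  # assumed buried impurity profile

# Method of characteristics for dC/dt = v dC/dz (erosion only, no mixing):
# C(z, t) = C0(z + v*t), so the surface signal is C(0, t) = C0(v*t).
t = np.linspace(0.0, 40.0, 401)
surface_conc = c0(v * t)                   # what a surface probe would record
depth = v * t                              # time maps linearly to eroded depth
```

The peak of the surface signal appears at t = 20 s, i.e. exactly at the original 10 nm burial depth; atomic mixing destroys this one-to-one mapping, which is why the general case cannot be deconvolved directly.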

  14. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    deblurring in the presence of impulsive noise,” Int. J. Comput. Vision, vol. 70, no. 3, pp. 279–298, Dec. 2006. [13] A. E. Beaton and J. W. Tukey, “The...AN l1-TV ALGORITHM FOR DECONVOLUTION WITH SALT AND PEPPER NOISE Brendt Wohlberg∗ T-7 Mathematical Modeling and Analysis Los Alamos National Laboratory...and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider

  15. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    Science.gov (United States)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and basic PRI-transformation theory to sort the components of the hybrid frequency hopping signal and estimate their parameters. The simulation results show that this method can correctly sort the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the frequency hopping period is about 5% and the estimation error of the hopping frequency is less than 1%. However, the performance of the method deteriorates seriously at low SNR.
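The spectral-entropy step can be sketched generically (not the authors' implementation; the sampling rate and tone frequency are assumed): a frame containing a narrowband frequency-hopping dwell concentrates power in a few FFT bins and therefore has low spectral entropy, whereas a noise-only frame has high entropy.

```python
import numpy as np

def spectral_entropy(frame):
    # treat the normalized power spectrum as a probability distribution
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

fs = 8000.0
t = np.arange(2048) / fs
rng = np.random.default_rng(1)
tone = np.sin(2 * np.pi * 1000.0 * t)   # one narrowband FH dwell
noise = rng.standard_normal(t.size)     # noise-only frame
```

Thresholding the per-frame entropy is one simple way to flag which frames contain a hop before period and frequency estimation.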

  16. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  17. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images.

    Science.gov (United States)

    Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong

    2017-01-01

    Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed the VGG-16 in all the segmentation. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a

  18. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma ray spectrometry is a passive non-destructive assay commonly used to identify and quantify the radionuclides present in large, complex objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectra (energies and net photoelectric absorption peak areas) and the second step is to determine the detection efficiency of the measuring scene. Commercial software packages use different methods to extract the raw spectrum data, but none are optimal for the treatment of spectra containing actinides. Spectra must be handled individually, which requires settings and substantial feedback from the operator; this prevents automatic processing of spectra and increases the risk of human error. In this context, the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from a spectrum. This methodology was applied with raw data and commercial software that needs configuration by the operator (GENIE2000, InterWinner...). This robust and fully automated methodology of uncertainty calculation covers the entire processing chain of the software. The methodology ensures, for all peaks processed by the deconvolution software, an extraction of peak energies to within 2 channels and an extraction of net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)

  19. Comparison of Quality of Life and Social Skills between Students with Visual Problems (Blind and Partially Blind) and Normal Students

    OpenAIRE

    Fereshteh Kordestani; Azam Daneshfar; Davood Roustaee

    2014-01-01

    This study aimed to compare the quality of life and social skills between students who are visually impaired (blind and partially blind) and normal students. The population consisted of all students with visual problems (blind and partially blind) and normal students in secondary schools in Tehran in the academic year 2013-2014. Using a multi-stage random sampling method, 40 students were selected from each group. The SF-36 quality of life questionnaire and Foster and Inderbitzen social skil...

  20. Blinding in randomized clinical trials: imposed impartiality

    DEFF Research Database (Denmark)

    Hróbjartsson, A; Boutron, I

    2011-01-01

    Blinding, or "masking," is a crucial method for reducing bias in randomized clinical trials. In this paper, we review important methodological aspects of blinding, emphasizing terminology, reporting, bias mechanisms, empirical evidence, and the risk of unblinding. Theoretical considerations...

  1. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Kuo Men

    2017-12-01

    Full Text Available Background: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods: The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results: The proposed DDNN method outperformed the VGG-16 in all the segmentation. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. Conclusion: DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and combination of MR images.
In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy

  2. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    Science.gov (United States)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  3. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    International Nuclear Information System (INIS)

    Geloni, G.; Saldin, E.L.; Schneidmiller, E.A.; Yurkov, M.V.

    2004-01-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  4. Measurement and deconvolution of detector response time for short HPM pulses: Part 1, Microwave diodes

    International Nuclear Information System (INIS)

    Bolton, P.R.

    1987-06-01

    A technique is described for measuring and deconvolving the response times of microwave diode detection systems in order to generate corrected input signals typical of an infinite detection rate. The method has been applied to cases of 2.86 GHz ultra-short HPM pulse detection where the pulse rise time is comparable to that of the detector, whereas the duration of a few nanoseconds is significantly longer. Results are specified in terms of the enhancement of equivalent deconvolved input voltages for given observed voltages. The convolution integral imposes the constraint of linear detector response to input power levels. This is physically equivalent to the conservation of integrated pulse energy in the deconvolution process. The applicable dynamic range of a microwave diode is therefore limited to a smaller signal region as determined by its calibration.

  5. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient impulsive and non-impulsive surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize each source's emission by its complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences in the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-to-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained by different sources present low values in their repeatability norm. The comparison demonstrates the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data, and to increase the performance of permanent acquisition installations for time-lapse applications.
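Removing a source signature given a reference recording of it is commonly implemented as frequency-domain division with water-level regularization; the following generic sketch (the synthetic wavelet and parameters are assumptions, not the paper's data) shows that step in isolation.

```python
import numpy as np

def waterlevel_deconv(trace, source, eps=1e-3):
    """Spectral division Y S* / max(|S|^2, eps*max|S|^2) to remove the signature."""
    Y = np.fft.rfft(trace)
    S = np.fft.rfft(source, trace.size)
    P = np.abs(S) ** 2
    P = np.maximum(P, eps * P.max())        # water level stabilizes weak frequencies
    return np.fft.irfft(Y * np.conj(S) / P, trace.size)

n = 256
wavelet = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)   # assumed source wavelet
refl = np.zeros(n)
refl[[50, 120, 200]] = [1.0, -0.7, 0.5]                      # reflectivity spikes
trace = np.fft.irfft(np.fft.rfft(refl) * np.fft.rfft(wavelet, n), n)
est = waterlevel_deconv(trace, wavelet)    # spikes re-emerge near 50, 120, 200
```

In the experiment the role of `wavelet` is played by the measured ground-force signal rather than an assumed waveform, which is precisely what the direct recordings make possible.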

  6. Convolutive Blind Source Separation Methods

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Larsen, Jan; Kjems, Ulrik

    2008-01-01

    During the past decades, much attention has been given to the separation of mixed sources, in particular for the blind case where both the sources and the mixing process are unknown and only recordings of the mixtures are available. In several situations it is desirable to recover all sources from … the recorded mixtures, or at least to segregate a particular source. Furthermore, it may be useful to identify the mixing process itself to reveal information about the physical mixing system. In some simple mixing models each recording consists of a sum of differently weighted source signals. However, in many … real-world applications, such as in acoustics, the mixing process is more complex. In such systems, the mixtures are weighted and delayed, and each source contributes to the sum with multiple delays corresponding to the multiple paths by which an acoustic signal propagates to a microphone…
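The convolutive mixing model described above can be written in a few lines; the toy impulse responses below are assumptions for illustration only, each microphone signal being a sum of weighted and delayed (convolved) sources.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)      # two sources

# toy room impulse responses h_ji from source i to microphone j
h11, h12 = np.array([1.0, 0.5, 0.25]), np.array([0.0, 0.6, 0.3])
h21, h22 = np.array([0.0, 0.7, 0.2]), np.array([1.0, 0.4, 0.1])

# each microphone records a sum of convolved sources:
#   x_j(t) = sum_i sum_tau h_ji(tau) s_i(t - tau)
x1 = np.convolve(s1, h11)[:n] + np.convolve(s2, h12)[:n]
x2 = np.convolve(s1, h21)[:n] + np.convolve(s2, h22)[:n]
```

Convolutive blind source separation must undo both the weights and the delays encoded in the h_ji, which is why it is substantially harder than the instantaneous-mixing case.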

  7. Analysis of low-pass filters for approximate deconvolution closure modelling in one-dimensional decaying Burgers turbulence

    Science.gov (United States)

    San, O.

    2016-01-01

    The idea of spatial filtering is central to approximate deconvolution large-eddy simulation (AD-LES) of turbulent flows. The need for low-pass filters arises naturally in the approximate deconvolution approach, which is based solely on mathematical approximations employing repeated filtering operators. Two families of low-pass spatial filters are studied in this paper: the Butterworth filters and the Padé filters. With a selection of various filtering parameters, variants of the AD-LES are systematically applied to the decaying Burgers turbulence problem, which is a standard prototype for more complex turbulent flows. Compared with direct numerical simulations, all forms of the AD-LES approach predict significantly better results than under-resolved simulations at the same grid resolution. However, the results depend strongly on the choice of filtering procedure and filter design. It is concluded that complete attenuation of the smallest scales is crucial to prevent energy accumulation at the grid cut-off.
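The repeated-filtering idea behind approximate deconvolution can be sketched with the van Cittert series u* ≈ Σ_{k=0}^{N} (I − G)^k ū; the three-point discrete filter below is a simple stand-in for the Butterworth/Padé filters studied in the paper, not one of them.

```python
import numpy as np

def lowpass(u):
    # simple three-point low-pass filter G (periodic domain)
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

def approx_deconv(ubar, N):
    # van Cittert series: u* = sum_{k=0}^{N} (I - G)^k applied to ubar
    u_star = np.zeros_like(ubar)
    term = ubar.copy()
    for _ in range(N + 1):
        u_star += term
        term = term - lowpass(term)
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8.0 * x)   # resolved + marginally resolved mode
ubar = lowpass(u)                        # the "filtered" field the LES sees
err1 = np.linalg.norm(approx_deconv(ubar, 1) - u)
err5 = np.linalg.norm(approx_deconv(ubar, 5) - u)
```

Each extra term recovers more of the attenuated scales; in AD-LES the truncation order of the series and the filter design together control how much energy survives near the grid cut-off.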

  8. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.
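For context, the arrival-time relations that the (h, k) stack exploits are easy to write down; the sketch below (assumed Vp and ray parameter) performs a brute-force grid search using the Ps conversion plus the PpPs multiple, which together break the h-k trade-off that a single phase leaves unresolved.

```python
import numpy as np

VP, P = 6.5, 0.06   # assumed crustal Vp (km/s) and ray parameter (s/km)

def delays(h, k):
    """Moho Ps and PpPs delay times (s) for thickness h (km) and Vp/Vs = k."""
    qs = np.sqrt((k / VP) ** 2 - P ** 2)    # S-wave vertical slowness (Vs = Vp/k)
    qp = np.sqrt((1.0 / VP) ** 2 - P ** 2)  # P-wave vertical slowness
    return h * (qs - qp), h * (qs + qp)

t_ps, t_ppps = delays(40.0, 1.75)           # synthetic "observed" delays

# brute-force (h, k) grid search on the two-phase misfit
hs, ks = np.arange(25.0, 55.0, 0.5), np.arange(1.60, 1.95, 0.01)
H, K = np.meshgrid(hs, ks, indexing="ij")
d1, d2 = delays(H, K)
ih, ik = np.unravel_index(np.argmin((d1 - t_ps) ** 2 + (d2 - t_ppps) ** 2), H.shape)
```

The transfer-function approach of the abstract replaces the stacked receiver-function amplitudes with a waveform misfit, but the same forward relations and grid search over (h, k) sit underneath it.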

  9. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    Science.gov (United States)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.

  10. Blind system identification of two-thermocouple sensor based on cross-relation method

    Science.gov (United States)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia there is a dynamic error in dynamic temperature measurement. In order to eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and was compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed from the output data of the thermocouple with the smaller time constant.
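The cross-relation idea can be sketched for two first-order sensors (all numbers below are assumed for illustration): since LTI filters commute, passing sensor 1's output through sensor 2's model, and vice versa, yields identical signals only when the trial time constants are correct, so a search over the pair minimizes the mismatch.

```python
import numpy as np

DT = 0.001  # assumed sample interval (s)

def sensor(x, tau):
    # first-order lag  tau*dy/dt + y = x,  discretized with a = exp(-DT/tau)
    a = np.exp(-DT / tau)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, x.size):
        y[i] = a * y[i - 1] + (1.0 - a) * x[i]
    return y

rng = np.random.default_rng(2)
x = 0.1 * np.cumsum(rng.standard_normal(4000))   # unknown gas temperature
y1, y2 = sensor(x, 0.010), sensor(x, 0.030)      # the two thermocouple outputs

# cross-relation: h2 * y1 = h1 * y2 holds for the true (tau1, tau2) pair;
# grid search minimizing || sensor(y1, t2) - sensor(y2, t1) ||
taus = np.arange(0.005, 0.050, 0.005)
best = min((np.linalg.norm(sensor(y1, t2) - sensor(y2, t1)), t1, t2)
           for t1 in taus for t2 in taus if t1 < t2)
```

The t1 < t2 constraint removes the swapped-pair ambiguity; the paper replaces this brute-force grid with particle swarm optimization over the same cross-relation cost.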

  11. [Sampling and measurement methods of the protocol design of the China Nine-Province Survey for blindness, visual impairment and cataract surgery].

    Science.gov (United States)

    Zhao, Jia-liang; Wang, Yu; Gao, Xue-cheng; Ellwein, Leon B; Liu, Hu

    2011-09-01

    To design the protocol of the China nine-province survey for blindness, visual impairment and cataract surgery, in order to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery. Protocol design began after the task of the national survey for blindness, visual impairment and cataract surgery was accepted from the Department of Medicine, Ministry of Health, China, in November 2005. The protocols of the Beijing Shunyi Eye Study (1996) and the Guangdong Doumen County Eye Study (1997), both supported by the World Health Organization, were taken as the basis for the protocol design. Relevant experts were invited to discuss and refine the draft protocol. An international advisory committee was established to examine and approve the draft protocol. Finally, the survey protocol was checked and approved by the Department of Medicine, Ministry of Health, China and the Prevention of Blindness and Deafness Programme, WHO. The survey protocol was designed according to the characteristics and the scale of the survey. The contents of the protocol included determination of the target population and survey sites, calculation of the sample size, design of the random sampling, composition and organization of the survey teams, determination of the examinees, the flowchart of the field work, survey items and methods, diagnostic criteria for blindness and moderate and severe visual impairment, measures of quality control, and methods of data management. The designed protocol became the standard and practical protocol for the survey to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery.

  12. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    Science.gov (United States)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RFs) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolving all the data recorded at a seismic station while removing phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat (at 40 km) beneath most of the Llano uplift but may thicken to the south and thin beneath the coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.

  13. Prevalence and causes of corneal blindness.

    Science.gov (United States)

    Wang, Haijing; Zhang, Yaoguang; Li, Zhijian; Wang, Tiebin; Liu, Ping

    2014-04-01

    The study aimed to assess the prevalence and causes of corneal blindness in a rural northern Chinese population. Cross-sectional study. The cluster random sampling method was used to select the sample. This population-based study included 11 787 participants of all ages in rural Heilongjiang Province, China. These participants underwent a detailed interview and an eye examination that included the measurement of visual acuity, slit-lamp biomicroscopy and direct ophthalmoscopy. An eye was considered to have corneal blindness if its visual acuity met the criteria for blindness as a result of corneal disease. Among the 10 384 people enrolled in the study, the prevalence of corneal blindness was 0.3% (95% confidence interval 0.2-0.4%). The leading cause was keratitis in childhood (40.0%), followed by ocular trauma (33.3%) and keratitis in adulthood (20.0%). Age and illiteracy were found to be associated with an increased prevalence of corneal blindness. Blindness due to corneal diseases in rural areas of northern China is a significant public health problem that needs to be given more attention. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  14. Blind Quantum Signature with Blind Quantum Computation

    Science.gov (United States)

    Li, Wei; Shi, Ronghua; Guo, Ying

    2017-04-01

    Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a concise structure is designed. Unlike traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. The inputs of the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information, even over imperfect channels. The receiver can then verify the validity of the signature using a quantum matching algorithm. Security is guaranteed by the entanglement of the quantum system used for blind quantum computation. The scheme provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.

  15. Improved system blind identification based on second-order ...

    Indian Academy of Sciences (India)

    An improved system blind identification method, based on second-order cyclostationary statistics and the properties of group delay, has been ... In the last decade, there has been considerable research on achieving blind identification.

  16. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  17. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area has been in the preliminary study phase covering geology, geochemistry, and geophysics. As part of the geophysical study, gravity measurements have been carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The results of the analysis will be useful for building a more realistic conceptual model of the Endut geothermal area.
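Euler deconvolution rests on the homogeneity relation (x − x0)∂g/∂x + (z − z0)∂g/∂z = N(B − g); a minimal noiseless sketch for a point source on a single profile (structural index N = 2; all values assumed for illustration, not Endut data):

```python
import numpy as np

x0_t, z0_t, N = 2.0, 3.0, 2.0              # true source x (km), depth (km), index
x = np.linspace(-10.0, 10.0, 201)          # observation profile at z = 0
u = x - x0_t
r2 = u ** 2 + z0_t ** 2
g = z0_t / r2 ** 1.5                       # vertical gravity of a point mass (Gm = 1)
gx = -3.0 * u * z0_t / r2 ** 2.5           # analytic horizontal derivative
gz = (3.0 * z0_t ** 2 - r2) / r2 ** 2.5    # analytic vertical derivative

# Euler's equation at z = 0, rearranged:  x0*gx + z0*gz + N*B = x*gx + N*g
A = np.column_stack([gx, gz, np.full_like(x, N)])
b = x * gx + N * g
x0, z0, B = np.linalg.lstsq(A, b, rcond=None)[0]
```

In practice the derivatives come from gridded field data and the linear system is solved in moving windows; clusters of recovered (x0, z0) solutions then mark fault and contact locations.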

  18. Fuzzy-based simulation of real color blindness.

    Science.gov (United States)

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage since they cannot perceive a substantial amount of visual information. This work presents two computational tools developed to assist color blind people. The first tests for color blindness and assesses its severity. The second tool is based on fuzzy logic and implements a proposed method to simulate real red and green color blindness, in order to generate synthetic cases of color vision disturbance in statistically significant numbers. Our purpose is to develop correction tools and obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.

  19. Individual differences in change blindness

    OpenAIRE

    Bergmann, Katharina Verena

    2016-01-01

    The present work shows the existence of systematic individual differences in change blindness. It can be concluded that the sensitivity for changes is a trait. That is, persons differ in their ability to detect changes, independent from the situation or the measurement method. Moreover, there are two explanations for individual differences in change blindness: a) capacity differences in visual selective attention that may be influenced by top-down activated attention helping to focus attentio...

  20. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  1. An improved method for polarimetric image restoration in interferometry

    Science.gov (United States)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that for the case of the linear polarization P, this approach fails to properly account for the complex vector nature of the emission, resulting in a process which depends on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components, with fewer spurious detections and lower computational cost due to reduced iterations, than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of an SDI CLEAN should be used.
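
    The core idea of a complex Högbom CLEAN can be sketched in a few lines. This toy 1-D version (my illustration under simplifying assumptions — periodic shifts, a known PSF — not the authors' implementation) selects the peak on |P| and subtracts the full complex amplitude, so the result is invariant under rotations of the Q/U axes:

    ```python
    import numpy as np

    def complex_hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-6):
        """Toy 1-D Hogbom CLEAN on a complex image P = Q + iU.

        The peak is chosen on |P| (rotation-invariant) and the complex
        amplitude is subtracted as a single quantity, rather than
        cleaning Q and U as independent scalar images.
        """
        residual = dirty.astype(complex).copy()
        model = np.zeros_like(residual)
        center = int(np.argmax(np.abs(psf)))
        for _ in range(niter):
            peak = int(np.argmax(np.abs(residual)))
            amp = residual[peak]
            if np.abs(amp) < threshold:
                break
            comp = gain * amp                    # loop-gain fraction of the peak
            model[peak] += comp
            # subtract the scaled PSF, shifted so its peak sits at `peak`
            residual -= comp * np.roll(psf, peak - center)
        return model, residual
    ```

    Because the peak criterion |P| and the subtraction are both invariant under a global rotation of the (Q, U) axes, the clean components do not depend on the coordinate frame, which is the property the standard scalar approach lacks.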

  2. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source

  3. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using a peak-deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq.kg-1 from Ankatso, soil sample spectra with average activities of about (125±2) Bq.kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100±120) Bq.kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows many multiplet regions to be deconvoluted: a quartet within 235-242 keV; Pb-214 and Pb-212 within 294-301 keV; Th-232 daughters within 582-584 keV; Ac-228 within 904-911 keV and within 964-970 keV; and Bi-214 within 1401-1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.
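
    The multiplet deconvolution performed by such a peak-fit program can be sketched as a least-squares decomposition. The example below (with hypothetical channels, a shared peak width and a linear background — not the IPF code) separates two overlapping Gaussian peaks whose centroids are known:

    ```python
    import numpy as np

    def fit_multiplet(channels, counts, centroids, sigma):
        """Deconvolve overlapping peaks with known centroids and width.

        Model: counts ~ sum_j a_j * Gauss(centroid_j, sigma) + b0 + b1*ch.
        Since the model is linear in (a_j, b0, b1), ordinary least squares
        recovers the amplitudes, and hence the peak areas, directly.
        """
        cols = [np.exp(-0.5 * ((channels - c) / sigma) ** 2) for c in centroids]
        cols += [np.ones_like(channels, dtype=float), channels.astype(float)]
        A = np.stack(cols, axis=1)                     # design matrix
        coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
        amps = coef[:len(centroids)]
        areas = amps * sigma * np.sqrt(2.0 * np.pi)    # area of each Gaussian
        return areas, coef
    ```

    Real spectrometry software additionally fits the centroids and widths nonlinearly and weights channels by counting statistics; the linear step above is the part that resolves a multiplet into individual peak areas.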

  4. Gene Expression Deconvolution for Uncovering Molecular Signatures in Response to Therapy in Juvenile Idiopathic Arthritis.

    Directory of Open Access Journals (Sweden)

    Ang Cui

    Full Text Available Gene expression-based signatures help identify pathways relevant to diseases and treatments, but are challenging to construct when there is a diversity of disease mechanisms and treatments in patients with complex diseases. To overcome this challenge, we present a new application of an in silico gene expression deconvolution method, ISOpure-S1, and apply it to identify a common gene expression signature corresponding to response to treatment in 33 juvenile idiopathic arthritis (JIA) patients. Using pre- and post-treatment gene expression profiles only, we found a gene expression signature that significantly correlated with a reduction in the number of joints with active arthritis, a measure of clinical outcome (Spearman rho = 0.44, p = 0.040, Bonferroni correction). This signature may be associated with a decrease in T-cells, monocytes, neutrophils and platelets. The products of most differentially expressed genes include known biomarkers for JIA such as major histocompatibility complexes and interleukins, as well as novel biomarkers including α-defensins. This method is readily applicable to expression datasets of other complex diseases to uncover shared mechanistic patterns in heterogeneous samples.

  5. What is Color Blindness?

    Science.gov (United States)


  6. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  7. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural-network blind equalization algorithm is derived and used in conjunction with zigzag coding to restore the original image. As a result, the effect of the PSF can be removed by the proposed algorithm, which helps eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function, applied to the grayscale CT image, using the conjugate gradient method. An analysis of the convergence performance of the algorithm verifies the feasibility of the method theoretically; meanwhile, simulation results and evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.
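
    The constant modulus cost at the heart of such blind equalizers is simple to write down. The sketch below is a real-valued stochastic-gradient (Godard) version on a 1-D signal — an illustration of the cost function only, not the paper's neural-network/conjugate-gradient variant:

    ```python
    import numpy as np

    def cma_equalize(x, taps=8, mu=1e-3, R2=1.0, epochs=5):
        """Constant-modulus (Godard) blind equalizer, real-valued sketch.

        Minimizes J = E[(y^2 - R2)^2] over the tap weights w by stochastic
        gradient descent; no reference signal is needed, hence 'blind'.
        """
        w = np.zeros(taps)
        w[taps // 2] = 1.0                      # center-spike initialization
        for _ in range(epochs):
            for n in range(taps, len(x)):
                u = x[n - taps:n][::-1]         # most recent samples first
                y = float(w @ u)                # equalizer output
                w -= mu * (y * y - R2) * y * u  # grad of (y^2 - R2)^2 / 4
        return w
    ```

    The cost penalizes outputs whose squared modulus deviates from R2, so an ISI-spreading channel (which scatters |y|) is inverted without knowing the transmitted data — the same principle the paper applies, per row, to image restoration.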

  8. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    Directory of Open Access Journals (Sweden)

    Roerdink Jos BTM

    2008-04-01

    Full Text Available Abstract Background We present a simple, data-driven method to extract haemodynamic response functions (HRFs) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs. Results ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains, and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series. Conclusion With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
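
    For intuition, the Fourier step of ForWaRD is essentially a regularized inverse filter. The sketch below shows only that step (ForWaRD then removes the residual noise by shrinking wavelet coefficients of the result, which is omitted here); the kernel and regularization constant are chosen for illustration:

    ```python
    import numpy as np

    def fourier_regularized_deconv(y, h, alpha=1e-3):
        """Fourier-domain Tikhonov-regularized deconvolution: the 'Fourier'
        half of ForWaRD, without the wavelet shrinkage step.

        y     : observed signal (circular convolution of x with h, plus noise)
        h     : known convolution kernel (zero-padded to len(y) internally)
        alpha : regularization; larger values damp noise amplification
                at frequencies where |H| is small
        """
        H = np.fft.rfft(h, len(y))
        X = np.fft.rfft(y) * np.conj(H) / (np.abs(H) ** 2 + alpha)
        return np.fft.irfft(X, len(y))
    ```

    With a small alpha the estimate is nearly unbiased but noisy; ForWaRD's insight is that this leaked noise is well localized in the wavelet domain, where thresholding removes it without a harsh frequency cut-off.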

  9. Causes of blindness in a special education school.

    Science.gov (United States)

    Onakpoya, O H; Adegbehingbe, B O; Omotoye, O J; Adeoye, A O

    2011-01-01

    Blind children and young adults have to overcome a lifetime of emotional, social and economic difficulties. They employ non-vision-dependent methods for education. To assess the causes of blindness in a special school in southwestern Nigeria, to aid the development of efficient blindness prevention programmes, a cross-sectional survey of the Ekiti State Special Education School, Nigeria was conducted in May-June 2008 after approval from the Ministry of Education. All students in the blind section were examined for visual acuity, with pen-torch eye examination and dilated fundoscopy, in addition to taking biodata and history. Thirty blind students with a mean age of 18±7.3 years and a male:female ratio of 1.7:1 were examined. Blindness resulted most commonly from cataract, eight (26.7%); glaucoma, six (20%); retinitis pigmentosa, four (16.7%); and post-traumatic phthisis bulbi, two (6.7%). Blindness was avoidable in 18 (61%) of cases. Glaucoma blindness was associated with redness, pain, lacrimation and photophobia in 15 (50%) and hyphaema in 16.7% of students; none of these students were on any medication at the time of the study. The causes of blindness in this rehabilitation school for the blind are largely avoidable, and glaucoma-blind pupils face additional painful eye-related morbidity during rehabilitation. While preventive measures and early intervention are needed against childhood cataract and glaucoma, regular ophthalmic consultations and medications are needed, especially for glaucoma-blind pupils.

  10. Investigation of the blindness status in Haimen of Jiangsu province

    Directory of Open Access Journals (Sweden)

    Dong-Bing Yuan

    2017-06-01

    Full Text Available AIM: To investigate the causes of blindness, other than those caused by cataract, in Haimen city. METHODS: According to the WHO criteria of blindness, the level of blindness was determined through ophthalmic examinations by associate chief or chief ophthalmologists specially trained for disability evaluation. The leading causes were also analyzed. RESULTS: In total, 3 266 persons were blind, of whom 2 118 had first-level and 1 148 had second-level blindness; 1 308 were male and 1 958 female. The leading causes of blindness were retinal and uveitis diseases (31.58%), genetic diseases (23.47%) and corneal diseases (14.49%). CONCLUSION: The leading causes of blindness in Haimen city of Jiangsu province are retinal and uveitis diseases, genetic diseases and corneal diseases. Early prevention and treatment should be strengthened to reduce the occurrence of blindness.

  11. Causes of visual impairment and blindness in children at Instituto Benjamin Constant Blind School, Rio de Janeiro

    Directory of Open Access Journals (Sweden)

    Daniela da Silva Verzoni

    Full Text Available Abstract Objective: To determine the main causes of visual impairment and blindness in children enrolled at the Instituto Benjamin Constant blind school (IBC) in 2013, to aid planning for the prevention and management of avoidable causes of blindness. Methods: Study design: cross-sectional observational study. Data were collected from the medical records of students attending IBC in 2013. Causes of blindness were classified according to the WHO/PBL examination record. Data were analyzed for children aged less than 16 years using the Stata 9 program. Results: Among 355 students attending IBC in 2013, 253 (73%) were included in this study. Of these children, 190 (75%) were blind and 63 (25%) visually impaired. The major anatomical site of visual loss was the retina (42%), followed by lesions of the globe (22%), optic nerve lesions (13.8%), central nervous system (8.8%) and cataract/pseudophakia/aphakia (8.8%). The etiology was unknown in 41.9% of cases, and neonatal factors accounted for 30.9%. Forty-eight percent of cases were potentially avoidable. Retinopathy of prematurity (ROP) was the main cause of blindness and, together with microphthalmia, optic nerve atrophy, cataract and glaucoma, accounted for more than 50% of cases. Conclusion: Provision and improvement of ROP, cataract and glaucoma screening and treatment programs could prevent avoidable visual impairment and blindness.

  12. Childhood blindness in India: a population based perspective

    Science.gov (United States)

    Dandona, R; Dandona, L

    2003-01-01

    Aim: To estimate the prevalence and causes of blindness in children in the southern Indian state of Andhra Pradesh. Methods: These data were obtained as part of two population based studies in which 6935 children ≤15 years of age participated. Blindness was defined as presenting distance visual acuity <6/60 in the better eye. Results: The prevalence of childhood blindness was 0.17% (95% confidence interval 0.09 to 0.30). Treatable refractive error caused 33.3% of the blindness, followed by 16.6% due to preventable causes (8.3% each due to vitamin A deficiency and amblyopia after cataract surgery). The major causes of the remaining blindness included congenital eye anomalies (16.7%) and retinal degeneration (16.7%). Conclusion: In the context of Vision 2020, the priorities for action to reduce childhood blindness in India are refractive error, cataract related amblyopia, and corneal diseases. PMID:12598433

  13. Iterated Gate Teleportation and Blind Quantum Computation.

    Science.gov (United States)

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  14. Visual impairment and blindness among the students of blind schools in Allahabad and its vicinity: A causal assessment

    Directory of Open Access Journals (Sweden)

    Sushank Ashok Bhalerao

    2015-01-01

    Full Text Available Background/Aims: Information on eye diseases in blind school children in Allahabad is rare and sketchy. A cross-sectional study was performed to identify the causes of blindness (BL) in blind school children, with an aim to gather information on ocular morbidity in the blind schools in Allahabad and its vicinity. Study Design and Setting: A cross-sectional study was carried out in all four blind schools in Allahabad and its vicinity. Materials and Methods: The students of the blind schools visited were included in the study, and informed consent was obtained from parents. Relevant ocular history was taken and basic ocular examinations were carried out on the students. Results: A total of 90 students were examined in the four schools for the blind in Allahabad and its vicinity. The main causes of severe visual impairment and BL in the better eye were microphthalmos (34.44%), corneal scar (22.23%), anophthalmos (14.45%), pseudophakia (6.67%), optic nerve atrophy (6.67%), buphthalmos/glaucoma (3.33%), cryptophthalmos (2.22%), staphyloma (2.22%), cataract (2.22%), retinal dystrophy (2.22%), aphakia (1.11%), coloboma (1.11%), retinal detachment (1.11%), etc. Of these, 22 (24.44%) students had preventable causes of BL and another 12 (13.33%) had treatable causes. Conclusion: Hereditary diseases, corneal scar, glaucoma and cataract were found to be the prominent causes of BL among the students of the blind schools. Almost 38% of the students had preventable or treatable causes, indicating the need for genetic counseling and focused intervention.

  15. Blindness causes analysis of 1854 hospitalized patients in Xinjiang

    Directory of Open Access Journals (Sweden)

    Tian-Zuo Wang

    2015-01-01

    Full Text Available AIM: To analyze the causes of blindness in 1 854 hospitalized patients in our hospital, and to explore strategies and directions for blindness prevention according to treatment efficacy. METHODS: Cluster sampling was used to select 5 473 patients hospitalized in our department of ophthalmology from September 2010 to August 2013, among whom 1 854 were blind, accounting for 33.88% of hospitalized patients. Blindness was classified according to the WHO criteria, based on best corrected visual acuity (BCVA). RESULTS: Of the 1 854 blind patients, 728 were blind in the right eye, 767 in the left eye and 359 in both eyes, giving a total of 2 213 blind eyes; patients aged 60-80 years were in the majority. The top three blinding diseases were cataract, diabetic retinopathy and glaucoma. Of the 2 213 blind eyes, 2 172 were treated, of which 1 762 (81.12%) were treated successfully and 410 (18.88%) were not. Among the failed cases, the first three diseases were diabetic retinopathy, glaucoma and retinal detachment. CONCLUSION: In recent years the etiology of blinding eye disease has changed, but cataract, diabetic retinopathy and glaucoma still account for a high incidence of blindness, so the treatment of diabetic retinopathy, glaucoma and retinal detachment should be the emphasis of blindness prevention and treatment in the future.

  16. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé , François-Xavier; Fadili , Jalal M.; Starck , Jean-Luc

    2012-01-01

    International audience; In this paper, we propose a Bayesian MAP estimator for solving the deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type spars...

  17. Methods for the computer-generated presentation of spatial objects for blind people on tactile media

    OpenAIRE

    Kurze, Martin

    2010-01-01

    This work first explores existing methods of presenting graphical information to blind people; the focus is on haptic presentation, i.e. a touchable form of presentation. The work emphasises an approach that does not try to make touchable the existing graphics, which were designed with sighted perceivers in mind and thus contain optical hints about the structure and position of certain geometric features (e.g. occlusion or perspective distortion); rather, the presented content itself, the model is ...

  18. Causes of blindness in blind unit of the school for the handicapped ...

    African Journals Online (AJOL)

    1. To describe the causes of blindness in pupils and staff in the blind unit of the School for the Handicapped in Kwara State. 2. To identify problems in the blind school and initiate intervention. All the blind or visually challenged people in the blind unit of the school for the handicapped were interviewed and examined using a ...

  19. New Physical Constraints for Multi-Frame Blind Deconvolution

    Science.gov (United States)

    2014-12-10

    Investigators include Dr. Julian Christou (Large Binocular Telescope Observatory). Performing organization: Real Academia de Ciencias y Artes de Barcelona, Rambla de los Estudios 115, 08002 Barcelona, Spain.

  20. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  1. Age and sex distribution of blindness in Ahoada East Local ...

    African Journals Online (AJOL)

    Background: Differences exist in the impact of blindness by age and sex; the overall risk of death being higher for blind males than females. Aim: To describe the age and sex differences among the blind in Ahoada-East Local Government Area (LGA) of Rivers State, Nigeria. Methods: Age and sex data were analyzed for 24 ...

  2. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering, allowing the unclosed terms of the filtered equations to be determined directly. A priori filtering shows that the predictions of the ADM yield fairly good agreement with the fine-grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results depend moderately on the choice of filter, such as a box or Gaussian filter, as well as on the deconvolution order. The a priori test finally reveals that the ADM is superior to the isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
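
    The reconstruction step of an ADM is a truncated van Cittert series: an approximation of the unfiltered field is built by repeatedly applying the filter. A minimal 1-D sketch (using a box filter and a periodic signal chosen purely for illustration) is:

    ```python
    import numpy as np

    def box_filter(u, w=5):
        """Centered top-hat (box) filter of width w, periodic boundaries."""
        n = len(u)
        kernel = np.zeros(n)
        kernel[:w] = 1.0 / w
        kernel = np.roll(kernel, -(w // 2))          # center the stencil
        return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(kernel)))

    def adm_reconstruct(u_bar, order=5, w=5):
        """Approximate deconvolution: u* = sum_{i=0..N} (I - G)^i u_bar,
        where G is the filter; u* approaches the unfiltered field as the
        deconvolution order N grows (for well-behaved filters)."""
        u_star = np.zeros_like(u_bar)
        term = u_bar.copy()
        for _ in range(order + 1):
            u_star += term
            term = term - box_filter(term, w)        # apply (I - G)
        return u_star
    ```

    The reconstructed field u* is then substituted into the unclosed terms of the filtered equations (e.g. the filtered drag force) so they can be evaluated directly, which is what the a priori analysis above tests against resolved TFM data.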

  3. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends this MS-VSTS to spherical two-plus-one dimensional (2D-1D) data, where the first two dimensions are longitude and latitude, and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to get rid of both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high energy gamma-rays in a very wide energy range (from 20 MeV to more than 300 GeV), and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).
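
    The variance-stabilization idea underlying MS-VSTS can be illustrated with the classical Anscombe transform, its scalar special case (MS-VSTS itself couples the stabilization with the multiscale transform on the sphere, which is beyond this sketch):

    ```python
    import numpy as np

    def anscombe(x):
        """Variance-stabilizing transform for Poisson data: maps counts,
        whose variance equals their mean, into values with approximately
        unit variance, so Gaussian denoisers can be applied."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        """Direct algebraic inverse (slightly biased at low counts;
        MS-VSTS-style methods use a more careful unbiased inverse)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0
    ```

    After stabilization, denoising (or deconvolution) proceeds in the transformed domain as if the noise were Gaussian, and the inverse transform maps the estimate back to counts.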

  4. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using ℓ1-ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace-based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  5. A bibliography on blind methods for identifying image forgery

    Czech Academy of Sciences Publication Activity Database

    Saic, Stanislav; Mahdian, Babak

    2010-01-01

    Vol. 25, No. 6 (2010), pp. 389-399. ISSN 0923-5965. R&D Projects: GA ČR (CZ) GA102/08/0470. Institutional research plan: CEZ:AV0Z10750506. Keywords: image forensics; digital forgery; image tampering; blind forgery detection; multimedia security. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.186, year: 2010. http://library.utia.cas.cz/separaty/2010/ZOI/saic-0346813.pdf

  6. Navigation Problems in Blind-to-Blind Pedestrians Tele-assistance Navigation

    OpenAIRE

    Balata , Jan; Mikovec , Zdenek; Maly , Ivo

    2015-01-01

    International audience; We raise a question whether it is possible to build a large-scale navigation system for blind pedestrians where a blind person navigates another blind person remotely by mobile phone. We have conducted an experiment, in which we observed blind people navigating each other in a city center in 19 sessions. We focused on problems in the navigator’s attempts to direct the traveler to the destination. We observed 96 problems in total, classified them on the basis of the typ...

  7. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  8. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min, a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra, and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and the nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during annealing.

  9. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiographs processed with a deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with original chest radiographs, for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by the scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of differences in reader preference was tested with the Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0), a statistically significant preference. Visualization of chest anatomical structures with the deconvolution algorithm of the PSF was thus judged superior to original chest radiography.

  10. The mathematics of a successful deconvolution: a quantitative assessment of mixture-based combinatorial libraries screened against two formylpeptide receptors.

    Science.gov (United States)

    Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia

    2013-05-30

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  11. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

    Woelffel , William; Claireaux , Corinne; Toplis , Michael J.; Burov , Ekaterina; Barthel , Etienne; Shukla , Abhay; Biscaras , Johan; Chopinet , Marie-Hélène; Gouillart , Emmanuelle

    2015-01-01

    Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi...
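Non-negative matrix factor deconvolution of a set of spectra can be sketched with the classic Lee-Seung multiplicative updates. This toy example uses purely synthetic data (not the glasses of the study) and factors an observed non-negative matrix into two non-negative components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": every observed spectrum is a non-negative mix of 2 components.
W_true = rng.random((50, 2))          # 50 spectra x 2 mixing weights
H_true = rng.random((2, 200))         # 2 component spectra x 200 Raman shifts
V = W_true @ H_true                   # observed non-negative data matrix

# Lee-Seung multiplicative updates for V ~ W @ H (Frobenius-norm objective).
k = 2
W = rng.random((50, k)) + 0.1
H = rng.random((k, 200)) + 0.1
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

The multiplicative form keeps W and H non-negative at every step, which is what makes the recovered rows of H interpretable as component spectra.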

  12. The role of figure-ground segregation in change blindness.

    Science.gov (United States)

    Landman, Rogier; Spekreijse, Henk; Lamme, Victor A F

    2004-04-01

    Partial report methods have shown that a large-capacity representation exists for a few hundred milliseconds after a picture has disappeared. However, change blindness studies indicate that very limited information remains available when a changed version of the image is presented subsequently. What happens to the large-capacity representation? New input after the first image may interfere, but this is likely to depend on the characteristics of the new input. In our first experiment, we show that a display containing homogeneous image elements between changing images does not render the large-capacity representation unavailable. Interference occurs when these new elements define objects. On that basis we introduce a new method to produce change blindness: The second experiment shows that change blindness can be induced by redefining figure and background, without an interval between the displays. The local features (line segments) that defined figures and background were swapped, while the contours of the figures remained where they were. Normally, changes are easily detected when there is no interval. However, our paradigm results in massive change blindness. We propose that in a change blindness experiment, there is a large-capacity representation of the original image when it is followed by a homogeneous interval display, but that change blindness occurs whenever the changed image forces resegregation of figures from the background.

  13. Sequential blind identification of underdetermined mixtures using a novel deflation scheme.

    Science.gov (United States)

    Zhang, Mingjian; Yu, Simin; Wei, Gang

    2013-09-01

    In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.

  14. Deconvolution analysis of 99mTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of 99mTc-methylene diphosphonate (MDP) and 47Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of 99mTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The 99mTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between 99mTc-MDP bone accumulation rates and the results of 47Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). Thus, deconvolution analysis of regional 99mTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of 99mTc-MDP kinetics and 47Ca kinetics might suggest a preferential binding of 99mTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  15. Assessment of blinding success among dental implant clinical trials: A systematic review

    Directory of Open Access Journals (Sweden)

    Jafar Kolahi

    2015-01-01

    Introduction: It is widely believed that blinding is a cornerstone of randomized clinical trials and that significant bias may result from unsuccessful blinding. However, it is not enough to claim that a clinical trial is single- or double-blinded; assessment of the success of blinding is ideal. The aim of this study was to evaluate the prevalence of assessment of blinding success among dental implant clinical trials and to introduce methods of blinding assessment to the implant research community. Methods: In November 2014, PubMed was searched by blinded and experienced researchers with the query "implant AND (blind* OR mask*)" using the following filters: (1) article type: clinical trial; (2) journal categories: dental journals; (3) field: title/abstract. Consequently, the title/abstract was reviewed in all relevant articles to find any attempt to assess the success of blinding in dental implant clinical trials. Results: The PubMed search yielded 86 clinical trials. The point of interest is that when "blind* OR mask*" was deleted from the query, the number of results increased to 1688 clinical trials. This shows that only 5% of dental implant clinical trials tried to use blinding. Disappointingly, we could not find any dental implant clinical trial reporting any attempt to assess the success of blinding. Conclusion: The current status of turning a blind eye to unblinding in dental implant clinical trials is not tolerable and needs to be improved. Researchers, protocol reviewers, local ethical committees, journal reviewers, and editors should make a concerted effort to incorporate, report, and publish such information to understand its potential impact on study results.

  16. Calendar systems and communication of deaf-blind children

    Directory of Open Access Journals (Sweden)

    Jablan Branka

    2012-01-01

    The aim of this paper is to explain the calendar systems and their role in teaching deaf-blind children. Deaf-blind persons belong to a group of multiple disabled persons. This disability should not be observed as a simple composite of visual and hearing impairments, but as a combination of sensory impairments that require special assistance in the development, communication and training for independent living. In our environment, deaf-blind children are being educated in schools for children with visual impairments or in schools for children with hearing impairments (in accordance with the primary impairment). However, deaf-blind children cannot be trained by means of special programs for children with hearing impairment, visual impairment or other programs for students with developmental disabilities without specific attention required by such a combination of sensory impairments. Deaf-blindness must be observed as a multiple impairment that requires special work methods, especially in the field of communication, whose development is severely compromised. Communication skills in deaf-blind people can be developed by using the calendar systems. They are designed in such a manner that they can be easily attainable to children with various sensory impairments. Calendars can be used to encourage and develop communication between adult persons and a deaf-blind child.

  17. The Burden of Blindness According To Age and Sex In Some ...

    African Journals Online (AJOL)

    Background: There are differences in the impact of blindness by age and sex; blind males having a higher risk for death than females. The aim of this study was to describe the age and sex difference among the blind in some Niger Delta communities of Nigeria. Methods: A community based, cross-sectional study was done ...

  18. A Simple Method for Estimating the Economic Cost of Productivity Loss Due to Blindness and Moderate to Severe Visual Impairment.

    Science.gov (United States)

    Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge

    2015-01-01

    To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
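The cost model above reduces to simple arithmetic: multiply the working-age (50-64) fraction of the affected population by the assumed annual income loss. A sketch with hypothetical inputs (the prevalence and wage figures below are invented for illustration, not taken from the study):

```python
# Hypothetical inputs, for illustration only (not the paper's data).
n_blind_50plus = 200_000          # blind persons aged >= 50
n_msvi_50plus = 800_000           # MSVI persons aged >= 50
annual_min_wage = 15_000.0        # US$, 2011
gni_per_capita = 48_000.0         # US$, 2011

# The paper's assumptions: half of the >=50 prevalence falls in the 50-64
# (working) age band, and MSVI causes a 30% individual productivity loss.
working_blind = 0.5 * n_blind_50plus
working_msvi = 0.5 * n_msvi_50plus

cob_mw = working_blind * annual_min_wage            # cost of blindness, MW method
cob_gni = working_blind * gni_per_capita            # cost of blindness, GNI method
comsvi_mw = working_msvi * 0.30 * annual_min_wage   # cost of MSVI, MW method

print(f"COB (MW):    ${cob_mw / 1e9:.1f} bn")
print(f"COB (GNI):   ${cob_gni / 1e9:.1f} bn")
print(f"COMSVI (MW): ${comsvi_mw / 1e9:.1f} bn")
```

With these invented inputs the MW method gives a smaller cost than the GNI method, mirroring the paper's observation that the GNI-based values approach those of the uncorrected refractive error model.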

  19. CONCEPTS OF COLORS IN CHILDREN WITH CONGENITAL BLINDNESS

    Directory of Open Access Journals (Sweden)

    Daniela DIMITROVA-RADOJICHIKJ

    2015-11-01

    This descriptive qualitative interview study investigates knowledge of colours in students who are congenitally blind. The purpose of this research was to explore how the lack of direct experience with colour, as a result of congenital blindness, affects judgments about semantic concepts. Qualitative methods were used to conduct interviews with 15 students. The results of the study indicate that the students know the colours and have a favourite colour. The implication for practice is to pay more attention when we teach students with congenital blindness to associate colours with specific objects.

  20. An Overwhelming Desire to Be Blind: Similarities and Differences between Body Integrity Identity Disorder and the Wish for Blindness

    Directory of Open Access Journals (Sweden)

    Katja Gutschke

    2017-03-01

    Background: The urge to be permanently blind is an extremely rare mental health disturbance. The underlying cause of this desire has not yet been determined, and it is uncertain whether the wish for blindness can be included in the context of body integrity identity disorder, a condition in which people feel an overwhelming need to be disabled, in many cases through amputation of a limb or through paralysis. Objective: The aim of this study is to test the hypothesis that people with a desire for blindness suffer from a greater degree of visual stress in daily activities than people in a visually healthy control group. Method: We created a Likert-scale questionnaire to measure visual stress, covering a wide range of everyday situations. The wish for blindness is extremely rare, and worldwide only 5 people with an urge to be blind were found to participate in the study (4 female, 1 male). In addition, a control group of 35 visually healthy people (28 female, 7 male) was investigated. Questions addressing issues that may be experienced by participants with a desire to be blind were integrated into the questionnaire. Results: The hypothesis that people with a desire for blindness suffer from significantly higher visual overload in activities of daily living than visually healthy subjects was confirmed; the difference in visual stress between the groups was significant at p < 0.01. In addition, an interview with the 5 affected participants supported the causal role of visual overload. Conclusions: The desire for blindness seems to originate from visual overload caused by either ophthalmologic or organic brain disturbances. In addition, psychological reasons such as certain personal character traits may play an active role in developing, maintaining, and reinforcing one's desire to be blind.

  1. Childhood blindness at a school for the blind in Riyadh, Saudi Arabia.

    Science.gov (United States)

    Kotb, Amgad A; Hammouda, Ehab F; Tabbara, Khalid F

    2006-02-01

    To determine the major causes of eye diseases leading to visual loss and blindness among children attending a school for the blind in Riyadh, Saudi Arabia. A total of 217 school children with visual disabilities attending a school for the blind in Riyadh were included. All children were brought to The Eye Center, Riyadh, and had complete ophthalmologic examinations including visual acuity testing, biomicroscopy, ophthalmoscopy, tonometry and laboratory investigations. In addition, some patients were subjected to electroretinography (ERG), electrooculography (EOG), measurement of visual evoked potentials (VEP), and laboratory work-up for congenital disorders. There were 117 male students with an age range of 6-19 years and a mean age of 16 years. In addition, there were 100 females with an age range of 6-18 years and a mean age of 12 years. Of the 217 children, 194 (89%) were blind from genetically determined diseases or congenital disorders and 23 (11%) were blind from acquired diseases. The major causes of bilateral blindness in children were retinal degeneration, congenital glaucoma, and optic atrophy. The most common acquired causes of childhood blindness were infections and trauma. The etiological pattern of childhood blindness in Saudi Arabia has changed from microbial keratitis to genetically determined diseases of the retina and optic nerve. Currently, the most common causes of childhood blindness are genetically determined causes. Consanguineous marriages may account for the autosomal recessive disorders. Public education programs should include information for the prevention of trauma and genetic counseling. Eye examinations for preschool and school children are mandatory for the prevention and cure of blinding disorders.

  2. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    International Nuclear Information System (INIS)

    Greenberg, M.; Ebel, D.S.

    2009-01-01

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 µm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-ray fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis of track No.82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.

  3. Blind source separation theory and applications

    CERN Document Server

    Yu, Xianchuan; Xu, Jindong

    2013-01-01

    A systematic exploration of both classic and contemporary algorithms in blind source separation with practical case studies    The book presents an overview of Blind Source Separation, a relatively new signal processing method.  Due to the multidisciplinary nature of the subject, the book has been written so as to appeal to an audience from very different backgrounds. Basic mathematical skills (e.g. on matrix algebra and foundations of probability theory) are essential in order to understand the algorithms, although the book is written in an introductory, accessible style. This book offers

  4. Altered sleep-wake patterns in blindness

    DEFF Research Database (Denmark)

    Aubin, S.; Gacon, C.; Jennum, P.

    2016-01-01

    discuss variability in the sleep–wake pattern between blind and normal-sighted individuals. Methods Thirty-day actigraphy recordings were collected from 11 blind individuals without residual light perception and 11 age- and sex-matched normal-sighted controls. From these recordings, we extracted...... the Pittsburgh Sleep Quality Index, and chronotype, using the Morningness-Eveningness Questionnaire. Results Although no group differences were found when averaging over the entire recording period, we found a greater variability throughout the 30-days in both sleep efficiency and timing of the night-time sleep...

  5. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of the optical sciences that has attracted much attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved beyond the diffraction limit. To achieve this, we demonstrate a novel microscopy technique, enabled by the optical memory effect, that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
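A deconvolution step of the kind described, non-iterative and requiring no scanning or phase retrieval, can be illustrated with a plain Wiener filter. This sketch assumes a known synthetic Gaussian PSF standing in for the speckle autocorrelation actually exploited via the memory effect:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution: a single, non-iterative inverse filter."""
    PSF = np.fft.fft2(psf, s=blurred.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(PSF) /
                                (np.abs(PSF)**2 + nsr)))

# Synthetic object (two point sources) and a Gaussian PSF as a stand-in.
obj = np.zeros((64, 64))
obj[20, 20], obj[40, 45] = 1.0, 0.7
x = np.arange(64)
X, Y = np.meshgrid(x, x)
psf = np.exp(-((X - 3)**2 + (Y - 3)**2) / (2 * 2.0**2))
psf /= psf.sum()

# Blur = circular convolution with the PSF; restoration inverts it directly.
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, s=obj.shape)))
restored = wiener_deconvolve(blurred, psf)
peak = np.unravel_index(restored.argmax(), restored.shape)
print(peak)
```

The noise-to-signal term `nsr` regularizes frequencies where the PSF transfer function is weak; in the memory-effect setting the PSF itself must first be estimated from the speckle pattern.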

  6. Rapid Assessment of Avoidable Blindness in Western Rwanda: Blindness in a Postconflict Setting

    OpenAIRE

    Mathenge, Wanjiku; Nkurikiye, John; Limburg, Hans; Kuper, Hannah

    2007-01-01

    Editors' Summary Background. VISION 2020, a global initiative that aims to eliminate avoidable blindness, has estimated that 75% of blindness worldwide is treatable or preventable. The WHO estimates that in Africa, around 9% of adults aged over 50 are blind. Some data suggest that people living in regions affected by violent conflict are more likely to be blind than those living in unaffected regions. Currently no data exist on the likely prevalence of blindness in Rwanda, a central African c...

  7. Global data on blindness.

    Science.gov (United States)

    Thylefors, B.; Négrel, A. D.; Pararajasegaram, R.; Dadzie, K. Y.

    1995-01-01

    Globally, it is estimated that there are 38 million persons who are blind. Moreover, a further 110 million people have low vision and are at great risk of becoming blind. The main causes of blindness and low vision are cataract, trachoma, glaucoma, onchocerciasis, and xerophthalmia; however, insufficient data on blindness from causes such as diabetic retinopathy and age-related macular degeneration preclude specific estimations of their global prevalence. The age-specific prevalences of the major causes of blindness that are related to age indicate that the trend will be for an increase in such blindness over the decades to come, unless energetic efforts are made to tackle these problems. More data collected through standardized methodologies, using internationally accepted (ICD-10) definitions, are needed. Data on the incidence of blindness due to common causes would be useful for calculating future trends more precisely. PMID:7704921

  8. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  9. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    Science.gov (United States)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new blind algorithm for active user enumeration in asynchronous direct-sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Because of this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-compliant method, based on subspace analysis and a training genetic algorithm, that combines the strengths of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
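The MDL flavour of the information criteria mentioned above is commonly implemented as the Wax-Kailath estimator applied to the eigenvalues of the sample covariance matrix. A sketch under simplifying assumptions (real-valued signals, white noise, synthetic signatures, no multipath -- not the paper's asynchronous CDMA setup):

```python
import numpy as np

def mdl_order(eigvals, N):
    """Wax-Kailath MDL estimate of the number of sources from sample
    covariance eigenvalues (sorted descending); N = number of snapshots."""
    p = len(eigvals)
    best_k, best_cost = 0, np.inf
    for k in range(p):
        tail = eigvals[k:]
        # log of (geometric mean / arithmetic mean) of the candidate noise eigenvalues
        L = np.sum(np.log(tail)) / (p - k) - np.log(np.mean(tail))
        cost = -N * (p - k) * L + 0.5 * k * (2 * p - k) * np.log(N)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic scenario: 3 active users observed through an 8-dimensional front end.
rng = np.random.default_rng(2)
N, p, k_true = 2000, 8, 3
S = rng.standard_normal((k_true, N))              # user symbols
A = rng.standard_normal((p, k_true))              # signatures / channel
X = A @ S + 0.1 * rng.standard_normal((p, N))     # received snapshots
eigvals = np.sort(np.linalg.eigvalsh(X @ X.T / N))[::-1]
print(mdl_order(eigvals, N))
```

AIC has the same data term with penalty `k * (2p - k)` and no `log N` factor, which is exactly the penalty-function difference the abstract refers to.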

  10. Blind estimation of blur in hyperspectral images

    Science.gov (United States)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean absolutely no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur PSF effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work: a new selection of salient edges through adequate thresholding of the cumulative distribution of their corresponding gradient magnitudes, and quasi-automatic, spatially adaptive tuning of the involved regularization parameters. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic spatial
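The salient-edge selection step, thresholding the cumulative distribution of gradient magnitudes, can be sketched as follows. This is a simplified stand-in (a fixed quantile on a synthetic image, not the authors' adaptive rule):

```python
import numpy as np

def salient_edge_mask(img, keep=0.05):
    """Keep only the strongest-gradient pixels: threshold the cumulative
    distribution of gradient magnitudes so that roughly the `keep` fraction
    of pixels with the largest gradients survives."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    thresh = np.quantile(mag, 1.0 - keep)   # (1-keep) point of the CDF
    return mag >= thresh

# Synthetic channel: a bright square on a dark background plus mild noise.
rng = np.random.default_rng(3)
img = np.zeros((100, 100))
img[30:70, 30:70] = 1.0
img += 0.01 * rng.standard_normal(img.shape)

mask = salient_edge_mask(img, keep=0.05)
# The selected pixels should concentrate on the square's contour.
print(mask.sum())
```

Only these edge pixels would then feed the regularized PSF estimation, keeping flat, noise-dominated regions out of the fit.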

  11. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

    The mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t, divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach might underestimate the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves obtained from patients with different renal clearance levels with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the true MTT when the MTT was higher than 6 min. The longer the MTT, the greater the underestimation. Up to an MTT value of 6 min, the error in the MTT estimation is negligible. As normal cortical transit is less than 2 min, this approach is used in patients to calculate the pixel-by-pixel cortical mean transit time and to create an MTT parametric image without deconvolution.
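The integral-ratio estimate described above (no deconvolution required) can be sketched numerically. This toy simulation assumes a mono-exponential retention function, a simplification of the study's simulated curves:

```python
import numpy as np

# Synthetic 20 min renogram: plasma input convolved with a retention function
# whose true mean transit time is known.
t = np.linspace(0, 20, 2001)          # minutes
dt = t[1] - t[0]
mtt_true = 3.0                        # minutes (inside the "safe" < 6 min range)
plasma = np.exp(-t / 2.5)             # plasma activity curve
retention = np.exp(-t / mtt_true)     # kidney retention; its time integral = MTT

# Renogram = discrete convolution of plasma input with retention.
renogram = np.convolve(plasma, retention)[: t.size] * dt

# MTT without deconvolution: ratio of the two time integrals over [0, t_end].
mtt_est = (renogram.sum() * dt) / (plasma.sum() * dt)
print(round(mtt_est, 2))
```

Raising `mtt_true` toward 10 min makes the same ratio visibly underestimate the true value, which is the finite-acquisition truncation effect the study quantifies.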

  12. Three-step interferometric method with blind phase shifts by use of interframe correlation between interferograms

    Science.gov (United States)

    Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.

    2018-06-01

    A new three-step interferometric method with blind phase shifts for retrieving phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed using a linear low-pass filter. Computer simulation, and experiments retrieving a gauge block surface area together with its areal surface roughness and waviness, have confirmed the reliability of the proposed three-step method.
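For context, the classical three-step retrieval with known shifts (0, 2π/3, 4π/3) reduces to a closed-form arctangent; the method above differs in first estimating two unknown ("blind") shifts from interframe correlation, but the final retrieval step has this general shape. A synthetic sketch:

```python
import numpy as np

# Three-step phase-shifting with KNOWN shifts (0, 2*pi/3, 4*pi/3), for
# illustration only; the blind method estimates the shifts beforehand.
x = np.linspace(0, 1, 256)
X, Y = np.meshgrid(x, x)
phase_true = 2 * np.pi * (0.5 * X + 0.2 * Y**2)   # synthetic surface phase map

A, B = 1.0, 0.7                                   # background and modulation
deltas = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
I1, I2, I3 = (A + B * np.cos(phase_true + d) for d in deltas)

# Closed-form retrieval: phi = atan2(sqrt(3)*(I3 - I2), 2*I1 - I2 - I3),
# then unwrapping along the smoothly increasing axis.
wrapped = np.arctan2(np.sqrt(3) * (I3 - I2), 2 * I1 - I2 - I3)
unwrapped = np.unwrap(wrapped, axis=1)
err = np.max(np.abs(unwrapped - phase_true))
print(f"max retrieval error: {err:.2e} rad")
```

Separating the retrieved map into roughness and waviness components would then follow by low-pass filtering, as in the paper's second stage.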

  13. Blindness and severe visual impairment in pupils at schools for the blind in Burundi.

    Science.gov (United States)

    Ruhagaze, Patrick; Njuguna, Kahaki Kimani Margaret; Kandeke, Lévi; Courtright, Paul

    2013-01-01

    To determine the causes of childhood blindness and severe visual impairment in pupils attending schools for the blind in Burundi in order to assist planning for services in the country. All pupils attending three schools for the blind in Burundi were examined. A modified WHO/PBL eye examination record form for children with blindness and low vision was used to record the findings. Data was analyzed for those who became blind or severely visually impaired before the age of 16 years. Overall, 117 pupils who became visually impaired before 16 years of age were examined. Of these, 109 (93.2%) were blind or severely visually impaired. The major anatomical cause of blindness or severe visual impairment was cornea pathology/phthisis (23.9%), followed by lens pathology (18.3%), uveal lesions (14.7%) and optic nerve lesions (11.9%). In the majority of pupils with blindness or severe visual impairment, the underlying etiology of visual loss was unknown (74.3%). More than half of the pupils with lens related blindness had not had surgery; among those who had surgery, outcomes were generally poor. The causes identified indicate the importance of continuing preventive public health strategies, as well as the development of specialist pediatric ophthalmic services in the management of childhood blindness in Burundi. The geographic distribution of pupils at the schools for the blind indicates a need for community-based programs to identify and refer children in need of services.

  14. Rehabilitation of cortical blindness secondary to stroke.

    Science.gov (United States)

    Gaber, Tarek A-Z K

    2010-01-01

    Cortical blindness is a rare complication of posterior circulation stroke. However, its complex presentation with sensory, physical, cognitive and behavioural impairments makes it one of the most challenging. An appropriate approach from a rehabilitation standpoint has not previously been reported. Our study aims to discuss the rehabilitation methods and outcomes of a cohort of patients with cortical blindness. The notes of all patients with cortical blindness referred to a local NHS rehabilitation service in the last 6 years were examined. Patients' demographics, presenting symptoms, scan findings, rehabilitation programmes and outcomes were documented. Seven patients presented to our service, six of them male. The mean age was 63. Patients 1, 2 and 3 had total blindness with severe cognitive and behavioural impairments, wandering and akathisia. All of them failed to respond to any rehabilitation effort and the focus was on damage limitation. Pharmacological interventions had a modest impact on behaviour and sleep pattern. These 3 patients were discharged to a nursing facility. Patients 4, 5, 6 and 7 had partial blindness of variable severity. All of them suffered from significant memory impairment. However, none suffered from any behavioural, physical or other cognitive impairment. Rehabilitation efforts for 3 patients were carried out collaboratively between brain injury occupational therapists and sensory disability officers. All patients experienced significant improvement in handicap and they all maintained community placements. This small cohort of patients suggests that the rehabilitation philosophy and outcomes of these 2 distinct groups, with either total or partial cortical blindness, differ significantly.

  15. A Study of Color Transformation on Website Images for the Color Blind

    OpenAIRE

    Siew-Li Ching; Maziani Sabudin

    2010-01-01

    In this paper, we study a color transformation method for website images for the color blind. The most common category of color blindness is red-green color blindness, in which these colors are viewed as beige. By transforming the colors of the images, the color blind can improve their color visibility. They can have a better view when browsing through websites. To transform colors on the website images, we study two algorithms which are the conversion techniques from RGB colo...
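As a rough illustration of the idea (recoloring image pixels so that red-green confusable hues become more distinguishable), a linear per-pixel transform might look like the sketch below. The 3x3 matrix coefficients and the helper `recolor_image` are invented for the example; they are not the conversion techniques studied in the paper.

```python
import numpy as np

# Illustrative 3x3 recoloring matrix that shifts part of the red-green
# contrast into the blue channel. The coefficients are made up for this
# sketch and carry no claim about the paper's algorithms.
RECOLOR = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.7, 0.3],
    [0.3, 0.0, 0.7],
])

def recolor_image(rgb):
    """Apply a linear color transform to an (H, W, 3) float image in [0, 1]."""
    out = rgb @ RECOLOR.T       # per-pixel matrix multiply
    return np.clip(out, 0.0, 1.0)

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 0.0, 0.0]     # pure red pixel
out = recolor_image(img)
```

After the transform, the pure red pixel gains a blue component, so it no longer collapses onto green for a red-green deficient viewer.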

  16. Blind Analysis in Particle Physics

    International Nuclear Information System (INIS)

    Roodman, A

    2003-01-01

    A review of the blind analysis technique, as used in particle physics measurements, is presented. The history of blind analyses in physics is briefly discussed. Next, the dangers and the advantages of a blind analysis are described. Three distinct kinds of blind analysis in particle physics are presented in detail. Finally, the BABAR collaboration's experience with the blind analysis technique is discussed.

  17. The Sokoto blind beggars: causes of blindness and barriers to rehabilitation services.

    Science.gov (United States)

    Balarabe, Aliyu Hamza; Mahmoud, Abdulraheem O; Ayanniyi, Abdulkabir Ayansiji

    2014-01-01

    To determine the causes of blindness and the barriers to accessing rehabilitation services (RS) among blind street beggars (bsb) in Sokoto, Nigeria. A cross-sectional survey of 202 bsb was conducted; causes of blindness were diagnosed by clinical ophthalmic examination. There were 107 (53%) males and 95 (47%) females with a mean age of 49 years (SD 12.2). Most bsb, 191 (94.6%), had non-formal education. Of the 190 (94.1%) irreversibly blind, 180/190 (94.7%) had no light perception (NPL) bilaterally. The major causes of blindness were non-trachomatous corneal opacity (60.8%) and trachomatous corneal opacity (12.8%). There were 166 (82%) blind from avoidable causes and 190 (94.1%) were irreversibly blind, with 76.1% due to avoidable causes. The available sub-standard RS were educational, vocational and financial support. The barriers to RS in the past included non-availability 151 (87.8%), inability to afford 2 (1.2%), unfelt need 4 (2.3%), family refusal 1 (0.6%), ignorance 6 (3.5%) and not being linked 8 (4.7%). The barriers to RS during the study period included the inability of 72 subjects (35.6%) to access RS, and 59 (81.9%) of these were due to lack of linkage to the existing services. Corneal opacification was the major cause of blindness among bsb. The main challenges to RS include the inadequate services available, and societal and user factors. Renewed efforts are warranted toward the prevention of avoidable causes of blindness, especially corneal opacities. The quality of life of the blind street beggar should be improved through available, accessible, affordable, well-maintained and sustained rehabilitation services.

  18. Contact Lenses for Color Blindness.

    Science.gov (United States)

    Badawy, Abdel-Rahman; Hassan, Muhammad Umair; Elsherif, Mohamed; Ahmed, Zubair; Yetisen, Ali K; Butt, Haider

    2018-06-01

    Color vision deficiency (color blindness) is an inherited genetic ocular disorder. While no cure for this disorder currently exists, several methods can be used to increase the color perception of those affected. One such method is the use of color filtering glasses, which are based on Bragg filters. While these glasses are effective, they are costly, bulky, and incompatible with other vision correction eyeglasses. In this work, a rhodamine derivative is incorporated in commercial contact lenses to filter out specific wavelength bands (≈545-575 nm) to correct color blindness. The biocompatibility assessment of the dyed contact lenses in human corneal fibroblasts and human corneal epithelial cells shows no toxicity, and cell viability remains at 99% after 72 h. This study demonstrates the potential of the dyed contact lenses in wavelength filtering and color vision deficiency management. © 2018 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
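The wavelength-notch idea above can be sketched numerically: a filter that blocks the quoted 545-575 nm band while passing the rest of the visible range. The ideal top-hat transmission profile below is a simplification of the dye's real absorption spectrum.

```python
import numpy as np

# Idealized notch filter over the visible range (nm). The hard 0/1
# transmission edges are an assumption of the sketch; a real dyed lens
# has a smooth absorption band.
wavelengths = np.arange(400, 701)       # 400-700 nm, 1 nm steps
transmission = np.where(
    (wavelengths >= 545) & (wavelengths <= 575), 0.0, 1.0)
blocked = wavelengths[transmission == 0.0]
```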

  19. Disorders of the Sleep-Wake Cycle in Blindness | Odeo | West ...

    African Journals Online (AJOL)

    BACKGROUND: Alteration of the intensity of light reaching the pineal gland through the visual pathway affects the sleepwake cycle in humans. OBJECTIVE: To determine the prevalence, types and severity of sleep-wake disorders in the blind and their relation to the degree and cause of blindness. METHODS: One hundred ...

  20. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses optically fast multichannel Thomson scattering optics that can be used for H-alpha emission profile measurement. A technique based on the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that for this case, about 28 Fourier modes are optimum to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by doing a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, assumed up-down symmetry and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges.
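The constrained least-squares fit described above can be illustrated in miniature. The sketch below enforces only the non-negativity constraint, solving directly for a non-negative emissivity profile by projected gradient descent rather than for Fourier-mode coefficients; the chord geometry, grid sizes, and profile are made-up stand-ins for the PDX setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: 12 viewing chords integrate a non-negative
# emissivity profile sampled at 8 radii. The positive random geometry
# matrix stands in for real chord weights (illustrative only).
n_ch, n_r = 12, 8
G = rng.uniform(0.0, 1.0, size=(n_ch, n_r))
r = np.linspace(0.0, 1.0, n_r)
e_true = 1.0 - r ** 2                   # peaked profile, zero at the edge
signal = G @ e_true

# Projected gradient descent for  min ||G e - signal||^2  s.t.  e >= 0:
# the projection step enforces the non-negative emissivity constraint.
step = 1.0 / np.linalg.norm(G, 2) ** 2
e = np.zeros(n_r)
for _ in range(50000):
    e = e - step * (G.T @ (G @ e - signal))
    e = np.maximum(e, 0.0)              # project onto the feasible set
```

On noiseless, consistent data the non-negative solution reproduces the true profile; with real, noisy chord signals the projection is what keeps the fitted emissivity physical.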

  1. Central venous catheterization: comparison between interventional radiological procedure and blind surgical procedure

    International Nuclear Information System (INIS)

    Song, Won Gyu; Jin, Gong Yong; Han, Young Min; Yu, He Chul

    2002-01-01

    To determine the usefulness and safety of radiological placement of a central venous catheter by prospectively comparing the results of interventional radiology and blind surgery. For placement of a central venous catheter, the blind surgical method was used in 78 cases (77 patients), and the interventional radiological method in 56 cases (54 patients). The male to female ratio was 66:68, and the patients' mean age was 48 (range, 18-80) years. A tunneled central venous catheter was used in 74 cases, and a chemoport in 60. We evaluated the success and duration of the procedures, the number of punctures required, and ensuing complications, comparing the results of the two methods. The success rates of the interventional radiological and the blind surgical procedure were 100% and 94.8%, respectively. The duration of central catheterization was 3-395 (mean, 120) days; that of the chemoport was 160.9 days, and that of the tunneled central venous catheter was 95.1 days. The mean number of punctures of the subclavian vein was 1.2 for interventional radiology, and 2.1 for blind surgery. The mean duration of the interventional radiological and the blind surgical procedure was, respectively, 30 and 40 minutes. The postprocedure complication rate was 27.6% (37 cases). Early complications occurred in nine cases (6.7%): where interventional radiology was used, there was one case of hematoma, while blind surgery gave rise to hematoma (n=2), pneumothorax (n=2), and early deviation of the catheter (n=4). Late complications occurred in 32 cases (23.9%). Interventional radiology involved infection (n=4), venous thrombosis (n=1), catheter displacement (n=2) and catheter obstruction (n=5), while the blind surgical procedure gave rise to infection (n=5), venous thrombosis (n=3), catheter displacement (n=4) and catheter obstruction (n=8). The success rate of interventional radiological placement of a central venous catheter was high and the complication rate was low. In comparison with the blind

  2. Postural control in blind subjects.

    Science.gov (United States)

    Soares, Antonio Vinicius; Oliveira, Cláudia Silva Remor de; Knabben, Rodrigo José; Domenech, Susana Cristina; Borges Junior, Noe Gomes

    2011-12-01

    To analyze postural control in acquired and congenitally blind adults. A total of 40 visually impaired adults participated in the research, divided into 2 groups, 20 with acquired blindness and 20 with congenital blindness - 21 males and 19 females, mean age 35.8 ± 10.8. The Brazilian version of the Berg Balance Scale and the motor domain of the functional independence measure were utilized. On the Berg Balance Scale the mean for acquired blindness was 54.0 ± 2.4 and 54.4 ± 2.5 for congenitally blind subjects; on the functional independence measure the mean for the acquired blind group was 87.1 ± 4.8 and 87.3 ± 2.3 for the congenitally blind group. Based upon the scales used, the results suggest that the ability to control posture can be developed through compensatory mechanisms and is not affected by visual loss in either congenital or acquired blindness.

  3. Failure to Detect Deaf-Blindness in a Population of People with Intellectual Disability

    Science.gov (United States)

    Fellinger, J.; Holzinger, D.; Dirmhirn, A.; van Dijk, J.; Goldberg, D.

    2009-01-01

    Background: Early identification of deaf-blindness is essential to ensure appropriate management. Previous studies indicate that deaf-blindness is often missed. We aim to discover the extent to which deaf-blindness in people with intellectual disability (ID) is undiagnosed. Method: A survey was made of the 253 residents of an institute offering…

  4. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    Science.gov (United States)

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images taken not close to the Scherzer focus condition and not representing the projected structures intuitively were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The defect structure restoration together with the recognition of Si and C atoms from the experimental images has been illustrated. The structure maps of an intrinsic stacking fault in the area of SiC, and of Lomer and 60° shuffle dislocations at the interface have been obtained at atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.
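As a generic illustration of image deconvolution (not the specific algorithm used in the paper), a frequency-domain Wiener filter recovers a sharper image from a blurred one when the point-spread function is known. The box-blur PSF and the noise-to-signal ratio `nsr` here are assumptions of the sketch.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution: a generic stand-in for the
    image-deconvolution step discussed above (the paper's own procedure
    is more involved). `nsr` is an assumed noise-to-signal ratio that
    regularizes frequencies where the PSF response is weak."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# Synthetic test: a bright block blurred by a small box PSF.
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0                     # 3x3 box blur, unit sum
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The restored image is closer to the ground truth than the blurred input, which is the practical criterion for a useful deconvolution.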

  5. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p + ), rarefactional pressure (p - ) and focal beam distribution were compared up to 10.6/-6.0 MPa (p + /p - ) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
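The complex deconvolution step mentioned above (dividing the spectrum of the hydrophone voltage by a frequency-dependent complex sensitivity to recover the pressure waveform) can be sketched as follows. The flat-magnitude, linear-phase sensitivity model and all numeric values are invented for the demo.

```python
import numpy as np

# Synthetic 1.05 MHz tone burst standing in for the focal pressure.
fs = 100e6                       # sample rate, Hz (illustrative)
n = 1024
t = np.arange(n) / fs
p_true = np.sin(2 * np.pi * 1.05e6 * t) * np.exp(-((t - 5e-6) ** 2) / (2 * (1e-6) ** 2))

# Invented complex sensitivity: flat 50 mV/MPa magnitude with a 200 ns
# delay (linear phase). A calibrated hydrophone supplies this curve.
f = np.fft.fftfreq(n, d=1 / fs)
M = 0.05 * np.exp(-2j * np.pi * f * 2e-7)

# Forward model: the voltage record is the pressure filtered by M.
v = np.real(np.fft.ifft(np.fft.fft(p_true) * M))

# Complex deconvolution: divide the voltage spectrum by the sensitivity.
p_rec = np.real(np.fft.ifft(np.fft.fft(v) / M))
```

Because the sensitivity here has no zeros, the division is stable; with a measured sensitivity one would regularize frequencies where |M| is small.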

  6. Information Communication Technology to support and include Blind students in a school for all An Interview study of teachers and students’ experiences with inclusion and ICT support to blind students

    OpenAIRE

    Rony, Mahbubur Rahman

    2017-01-01

    The topic of this study is how blind students and teachers experience Information Communication Technology as a tool to support and include blind students in a school for all. The study investigates how Information and Communication Technologies (ICT) enable blind students to adjust to non-special schools. The research method used to collect data is the interview. The goal is to get insight into teachers' and students' experiences with inclusion and ICT as a tool to support blind student...

  7. Parenting Stress in Mothers of Mentally Retarded, Blind, Deaf and Physically Disabled Children

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Atefvahid

    2017-03-01

    Full Text Available Background and Objective: Parents of children with disabilities have poorer physical and mental health and experience greater stress. This study was conducted to evaluate parenting stress in mothers of mentally retarded, blind, deaf and physically disabled children. Materials and Methods: This study was causal-comparative. The study population included 310 mothers of exceptional children (mothers of children with mental retardation, blindness, deafness and physical-motor disabilities, 7 to 12 years of age, enrolled in exceptional primary schools in Tehran in the academic year 1389-90). Multi-stage cluster sampling was used. The data obtained from parenting stress questionnaires were analyzed using multivariate analysis of variance (MANOVA). Results: The results showed that parenting stress in mothers of blind versus mentally retarded, deaf versus mentally retarded, and physically disabled versus blind and deaf children differs significantly. In addition, there was a significant difference between the mean scores of the blind, physically disabled, mentally retarded and deaf groups on the distraction-hyperactivity subscale. Conclusion: Mothers of children with mental retardation, physical disorders, blindness and deafness experience the most parenting stress, in that order.

  8. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp; Hassibi, Babak

    2017-01-01

    that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the signal's magnitude of the first

  9. CONCEPTS OF COLORS IN CHILDREN WITH CONGENITAL BLINDNESS

    OpenAIRE

    DIMITROVA-RADOJICHIKJ Daniela

    2015-01-01

    This descriptive qualitative interview study investigates knowledge of colours in students who are congenitally blind. The purpose of this research was to explore how the lack of direct experience with colour, as a result of congenital blindness, affects judgments about semantic concepts. Qualitative methods were used to conduct interviews with 15 students. The results of the study indicate that students know the colours and have a favourite colour. The implications for practice are to pay ...

  10. Causes and emerging trends of childhood blindness: findings from schools for the blind in Southeast Nigeria.

    Science.gov (United States)

    Aghaji, Ada; Okoye, Obiekwe; Bowman, Richard

    2015-06-01

    To ascertain the causes of severe visual impairment and blindness (SVI/BL) in schools for the blind in southeast Nigeria and to evaluate temporal trends. All children who developed blindness at schools for the blind in southeast Nigeria were examined. All the data were recorded on a WHO/Prevention of Blindness (WHO/PBL) form, entered into a Microsoft Access database and transferred to STATA V.12.1 for analysis. To estimate temporal trends in causes of blindness, older (>15 years) children were compared with younger (≤15 years) children. 124 children were identified with SVI/BL. The most common anatomical site of blindness was the lens (33.9%). Overall, avoidable blindness accounted for 73.4% of all blindness. Exploring trends in SVI/BL between children ≤15 years of age and those >15 years old, this study shows a reduction in avoidable blindness but an increase in cortical visual impairment in the younger age group. The results from this study show a statistically significant decrease in avoidable blindness in children ≤15 years old. Corneal blindness appears to be decreasing but cortical visual impairment seems to be emerging in the younger age group. Appropriate strategies for the prevention of avoidable childhood blindness in Nigeria need to be developed and implemented. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. PHYSIOTHERAPY OF BLIND AND LOW VISION INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    Alenka Tatjana Sterle

    2002-12-01

    Full Text Available Background. The authors present a preventive physiotherapy programme intended to improve the well-being of persons who have been blind or visually impaired since birth or experience partial or complete loss of vision later in life as a result of injury or disease. Methods. Different methods and techniques of physiotherapy, kinesitherapy and relaxation used in the rehabilitation of visually impaired persons are described. Results. The goals of timely physical treatment are to avoid unnecessary problems, such as improper posture, tension of the entire body, face and eyes, and deterioration of facial expression, that often accompany partial or complete loss of vision. Regular training improves functional skills, restores the skills that have been lost, and prevents the development of defects and consequent disorders of the locomotor apparatus. Conclusions. It is very difficult to change the lifestyle and habits of blind and visually impaired persons. Elderly people in particular, who experience complete or partial loss of vision later in their lives, are often left to their fate. Therefore blind and visually impaired persons of all age groups should be enrolled in a suitable rehabilitation programme that will improve the quality of their life.

  12. Process-Driven Math: An Auditory Method of Mathematics Instruction and Assessment for Students Who Are Blind or Have Low Vision

    Science.gov (United States)

    Gulley, Ann P.; Smith, Luke A.; Price, Jordan A.; Prickett, Logan C.; Ragland, Matthew F.

    2017-01-01

    Process-Driven Math is a fully audio method of mathematics instruction and assessment that was created at Auburn University at Montgomery, Alabama, to meet the needs of one particular student, Logan. He was blind, mobility impaired, and he could not speak above a whisper. Logan was not able to use traditional low vision tools like braille and…

  13. Determination of selected endogenous anabolic androgenic steroids and ratios in urine by ultra high performance liquid chromatography tandem mass spectrometry and isotope pattern deconvolution.

    Science.gov (United States)

    Pitarch-Motellón, J; Sancho, J V; Ibáñez, M; Pozo, O; Roig-Navarro, A F

    2017-09-15

    An isotope dilution mass spectrometry (IDMS) method for the determination of selected endogenous anabolic androgenic steroids (EAAS) in urine by UHPLC-MS/MS has been developed using the isotope pattern deconvolution (IPD) mathematical tool. The method has been successfully validated for testosterone, epitestosterone, androsterone and etiocholanolone, employing their respective deuterated analogs using two certified reference materials (CRM). Accuracy was evaluated as recovery of the certified values and ranged from 75% to 108%. Precision was assessed in intraday (n=5) and interday (n=4) experiments, with RSDs below 5% and 10% respectively. The method was also found suitable for real urine samples, with limits of detection (LOD) and quantification (LOQ) below the normal urinary levels. The developed method meets the requirements established by the World Anti-Doping Agency for the selected steroids for Athlete Biological Passport (ABP) measurements, except in the case of androsterone, which is currently under study. Copyright © 2017. Published by Elsevier B.V.
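The IPD idea above (recovering the molar fractions of natural analyte and labeled standard from the measured isotopologue pattern by solving a small linear system) can be sketched as follows; the isotope patterns and fractions are made-up numbers, not real steroid data.

```python
import numpy as np

# Each column is the (invented) isotopologue abundance pattern of one
# species: natural analyte and deuterated internal standard. Rows are
# measured mass channels.
A = np.array([
    [0.90, 0.01],
    [0.08, 0.04],
    [0.02, 0.95],
])
x_true = np.array([0.3, 0.7])           # molar fractions in the mixture
measured = A @ x_true                   # simulated measured pattern

# Isotope pattern deconvolution: least-squares solve for the fractions.
x_fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
ratio = x_fit[0] / x_fit[1]             # analyte-to-standard molar ratio
```

In an IDMS workflow this ratio, times the known amount of spiked standard, yields the analyte concentration without an external calibration curve.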

  14. Isotropic non-white matter partial volume effects in constrained spherical deconvolution

    Directory of Open Access Journals (Sweden)

    Timo eRoine

    2014-03-01

    Full Text Available Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple nonparallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion weighting (2500-3000 s/mm2), reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs.

  15. Isotropic non-white matter partial volume effects in constrained spherical deconvolution.

    Science.gov (United States)

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan

    2014-01-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. 
We suggest acquiring data with high diffusion-weighting 2500-3000 s/mm(2), reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs in CSD.

  16. Determination of the Projected Atomic Potential by Deconvolution of the Auto-Correlation Function of TEM Electron Nano-Diffraction Patterns

    Directory of Open Access Journals (Sweden)

    Liberato De Caro

    2016-11-01

    Full Text Available We present a novel method to determine the projected atomic potential of a specimen directly from transmission electron microscopy coherent electron nano-diffraction patterns, overcoming common limitations encountered so far due to the dynamical nature of electron-matter interaction. The projected potential is obtained by deconvolution of the inverse Fourier transform of experimental diffraction patterns rescaled in intensity by using theoretical values of the kinematical atomic scattering factors. This novelty enables the compensation of dynamical effects typical of transmission electron microscopy (TEM) experiments on standard specimens with thicknesses up to a few tens of nm. The projected atomic potentials so obtained are averaged on sample regions illuminated by nano-sized electron probes and are in good quantitative agreement with theoretical expectations. Contrary to lens-based microscopy, here the spatial resolution in the retrieved projected atomic potential profiles is related to the finer lattice spacing measured in the electron diffraction pattern. The method has been successfully applied to experimental nano-diffraction data of crystalline centrosymmetric and non-centrosymmetric specimens, achieving a resolution of 65 pm.
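The link between the measured diffraction intensity and the auto-correlation function exploited above is the Wiener-Khinchin relation: the inverse Fourier transform of the intensity |F|² equals the (circular) autocorrelation of the object. A 1-D toy check with an invented object:

```python
import numpy as np

# Invented 1-D "object"; the detector records only |F|^2, losing phase.
obj = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
F = np.fft.fft(obj)
intensity = np.abs(F) ** 2                   # what a diffraction pattern gives

# Inverse FT of the intensity: the circular autocorrelation of the object.
autocorr = np.real(np.fft.ifft(intensity))

# Direct computation of the same circular autocorrelation, for comparison.
direct = np.array([np.sum(obj * np.roll(obj, -k)) for k in range(len(obj))])
```

The paper's method then deconvolves this autocorrelation-domain information, rescaled by kinematical scattering factors, to reach the projected potential; this sketch verifies only the first, model-independent step.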

  17. Causes of Adult Blindness at ECWA Eye Hospital, Kano | Olatunji ...

    African Journals Online (AJOL)

    Aim: To identify the cause of adult blindness at ECWA Eye Hospital, Kano. Materials and methods: It was a hospital-based prospective study. Blindness was defined as vision of <3/60 in the better eye. The history of each patient was taken and a routine ocular examination was conducted using a Snellen or E-chart, a pen ...

  18. Fair quantum blind signatures

    International Nuclear Information System (INIS)

    Tian-Yin, Wang; Qiao-Yan, Wen

    2010-01-01

    We present a new fair blind signature scheme based on the fundamental properties of quantum mechanics. In addition, we analyse the security of this scheme, and show that it is not possible to forge valid blind signatures. Moreover, comparisons between this scheme and public key blind signature schemes are also discussed. (general)

  19. Image restoration for civil engineering structure monitoring using imaging system embedded on UAV

    Science.gov (United States)

    Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem

    2013-04-01

    estimated blur kernel for performing the deconvolution of the acquired image. In the present work, different regularization methods, mainly based on the aforementioned Total Variation pseudo-norm, are studied and analysed. The key points of their respective implementations, their properties and their limits are investigated in this particular application context. References [1] J. Hadamard. Lectures on Cauchy's problem in linear partial differential equations. Yale University Press, 1923. [2] A. N. Tihonov. On the resolution of incorrectly posed problems and regularisation method (in Russian). Doklady A. N. SSSR, 151(3), 1963. [3] C. R. Vogel. Computational Methods for inverse problems, SIAM, 2002. [4] A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 914-929, 1991. [5] J. Biemond, R. L. Lagendijk, and R. M. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856-883, 1990. [6] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43-64, 1996. [7] Y. L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 416-428, 1996. [8] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998. [9] S. Chardon, B. Vozel, and K. Chehdi. Parametric Blur Estimation Using the GCV Criterion and a Smoothness Constraint on the Image. Multidimensional Systems and Signal Processing Journal, Kluwer Ed., 10:395-414, 1999. [10] B. Vozel, K. Chehdi, and J. Dumoulin. Myopic image restoration for civil structures inspection using UAV (in French). In GRETSI, 2005. [11] L. Bar, N. Sochen, and N. Kiryati. Semi-blind image restoration via Mumford-Shah regularization. IEEE Transactions on Image Processing, 15
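A toy 1-D version of the Total Variation regularized restoration discussed in this record (non-blind, with a known kernel, unlike the blind TV deconvolution of reference [8]) can be written as plain gradient descent on a smoothed TV objective. All parameter values are illustrative assumptions.

```python
import numpy as np

def tv_deblur_1d(y, h, lam=1e-3, eps=1e-4, lr=0.4, iters=600):
    """Gradient descent on ||h * x - y||^2 + lam * sum_i sqrt((Dx)_i^2 + eps):
    deconvolution with a smoothed total-variation penalty. Step size,
    smoothing eps, and weight lam are illustrative choices."""
    n = len(y)
    H = np.fft.fft(h, n)
    x = y.copy()
    for _ in range(iters):
        # Gradient of the (circular) data-fidelity term, via FFTs.
        r = np.real(np.fft.ifft(H * np.fft.fft(x))) - y
        g_data = 2.0 * np.real(np.fft.ifft(np.conj(H) * np.fft.fft(r)))
        # Gradient of the smoothed TV term on circular forward differences.
        d = np.roll(x, -1) - x
        w = d / np.sqrt(d * d + eps)
        g_tv = np.roll(w, 1) - w
        x = x - lr * (g_data + lam * g_tv)
    return x

# Piecewise-constant signal blurred by a 3-tap box filter.
x_true = np.zeros(64)
x_true[20:40] = 1.0
h = np.array([1.0, 1.0, 1.0]) / 3.0
y = np.real(np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(x_true)))
x_rec = tv_deblur_1d(y, h)
```

The TV penalty favors piecewise-constant solutions, which is why it preserves the sharp edges that a quadratic (Tikhonov) penalty would smear.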

  20. [Visual impairment and blindness in children in a Malawian school for the blind].

    Science.gov (United States)

    Schulze Schwering, M; Nyrenda, M; Spitzer, M S; Kalua, K

    2013-08-01

    The aim of this study was to determine the anatomic sites of severe visual impairment and blindness in children in an integrated school for the blind in Malawi, and to compare the results with those of previous Malawian blind school studies. Children attending an integrated school for the blind in Malawi were examined in September 2011 using the standard WHO/PBL eye examination record for children with blindness and low vision. Visual acuity [VA] of the better eye was classified using the standardised WHO reporting form. Fifty-five pupils aged 6 to 19 years were examined: 39 (71%) males and 16 (29%) females. Thirty-eight (69%) were blind [BL], 8 (15%) were severely visually impaired [SVI], 8 (15%) visually impaired [VI], and 1 (1.8%) was not visually impaired [NVI]. The major anatomic sites of visual loss were the optic nerve (16%) and retina (16%), followed by lens/cataract (15%), cornea (11%) and lesions of the whole globe (11%), uveal pathologies (6%) and cortical blindness (2%). The exact aetiology of VI or BL could not be determined in most children. Albinism accounted for 13% (7/55) of the visual impairments. 24% of the cases were considered to be potentially avoidable: refractive amblyopia among pseudophakic patients and corneal scarring. Optic atrophy, retinal diseases (mostly albinism) and cataracts were the major causes of severe visual impairment and blindness in children in an integrated school for the blind in Malawi. Corneal scarring was now the fourth cause of visual impairment, compared to being the commonest cause 35 years ago. Congenital cataract and its postoperative outcome were the commonest remedial causes of visual impairment. Georg Thieme Verlag KG Stuttgart · New York.

  1. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) on the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded agreeing results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO₂⁻, an axial component CO₃³⁻, as well as four isotropic components, three of which could be attributed to SO₂⁻, a tumbling CO₂⁻ and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra, where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods.
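The core idea behind common factor analysis of a spectrum set, namely that the number of underlying components can be recovered from many mixed spectra, can be sketched numerically. The toy below uses a plain SVD as a simple stand-in for MLCFA; the Gaussian lineshapes and all parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
npts, nspec, ncomp = 400, 40, 3
x = np.linspace(-10, 10, npts)

# Three synthetic lineshapes standing in for e.g. orthorhombic, axial
# and isotropic EPR components.
def gauss(center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

components = np.stack([gauss(-3, 0.8), gauss(0, 1.5), gauss(4, 0.5)])

sigma = 0.005                                       # measurement noise level
mixing = rng.uniform(0.2, 1.0, size=(nspec, ncomp)) # per-spectrum weights
spectra = mixing @ components + sigma * rng.standard_normal((nspec, npts))

# The number of singular values well above the noise floor estimates how
# many common factors generated the spectra (here the noise level is known,
# so a simple threshold on the expected largest noise singular value works).
s = np.linalg.svd(spectra, compute_uv=False)
noise_edge = sigma * (np.sqrt(nspec) + np.sqrt(npts))
n_factors = int(np.sum(s > 3 * noise_edge))
```

MLCFA proper fits a maximum-likelihood factor model instead of truncating an SVD, which is what makes it robust enough for dose-response work on real spectra.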

  2. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    Science.gov (United States)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by a crosscorrelation of responses. Recently, an alternative implementation was proposed: SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields.
We address specific problems for teleseismic wavefields, such as long and complicated source
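The difference between SI by crosscorrelation and SI by deconvolution can be seen in a minimal 1D frequency-domain sketch (a toy model with impulsive Green's functions and an unknown Ricker-like source wavelet; the regularization constant is illustrative). Crosscorrelation leaves the source autocorrelation imprinted on the retrieved response, while a regularized deconvolution removes the wavelet; MDD proper generalizes this division to a multidimensional inversion over a receiver array.

```python
import numpy as np

n = 512
t = np.arange(n)
# Unknown source wavelet (Ricker-like), common to both receivers.
tau = (t - 40) / 6.0
wavelet = (1 - tau**2) * np.exp(-0.5 * tau**2)

def delayed_impulse(d):
    g = np.zeros(n)
    g[d] = 1.0
    return g

g_A, g_B = delayed_impulse(10), delayed_impulse(60)   # true medium responses
conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
u_A, u_B = conv(g_A, wavelet), conv(g_B, wavelet)     # recorded responses

U_A, U_B = np.fft.fft(u_A), np.fft.fft(u_B)

# SI by crosscorrelation: retrieved response still carries the wavelet's
# autocorrelation.
xcorr = np.real(np.fft.ifft(U_B * np.conj(U_A)))

# SI by (regularized) deconvolution: the wavelet is removed.
eps = 1e-3 * np.max(np.abs(U_A)) ** 2
mdd = np.real(np.fft.ifft(U_B * np.conj(U_A) / (np.abs(U_A) ** 2 + eps)))

lag_xcorr = int(np.argmax(xcorr))   # relative traveltime, wavelet-blurred
lag_mdd = int(np.argmax(mdd))       # relative traveltime, impulsive
```

Both estimates peak at the relative delay (60 − 10 = 50 samples), but the deconvolved trace is impulsive while the crosscorrelation is the wavelet autocorrelation shifted to that lag.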

  3. A survey of visual impairment and blindness in children attending four schools for the blind in Cambodia.

    Science.gov (United States)

    Sia, David I T; Muecke, James; Hammerton, Michael; Ngy, Meng; Kong, Aimee; Morse, Anna; Holmes, Martin; Piseth, Horm; Hamilton, Carolyn; Selva, Dinesh

    2010-08-01

To identify the causes of blindness and severe visual impairment (BL/SVI) in children attending four schools for the blind in Cambodia and to provide spectacles, low vision aids, orientation and mobility training and ophthalmic treatment. Children attending four schools for the blind in Cambodia were examined. Causes of visual impairment and blindness were determined and categorized using World Health Organization methods. Of the 95 children examined, 54.7% were blind (BL) and 10.5% were severely visually impaired (SVI). The major anatomical site of BL/SVI was the lens in 27.4%, cornea in 25.8%, retina in 21% and whole globe in 17.7%. The major underlying etiologies of BL/SVI were hereditary factors (mainly cataract and retinal dystrophies) in 45.2%, undetermined/unknown (mainly microphthalmia and anterior segment dysgenesis) in 38.7% and childhood factors in 11.3%. Avoidable causes of BL/SVI accounted for 50% of the cases; 12.9% of the total were preventable, with measles being the commonest cause (8.1% of the total); 37.1% were treatable, with cataracts and glaucoma being the commonest causes (22.6% and 4.8% respectively). More than 35% of children required an optical device and 27.4% had potential for visual improvement with intervention. Half of the BL/SVI causes were potentially avoidable. The data support the need for increased coverage of measles immunization. There is also a need to develop specialized pediatric ophthalmic services for the management of surgically remediable conditions, and to provide optometric, low vision and orientation and mobility services. Genetic risk counseling services may also be considered.

  4. Beliefs and Attitude to Eye Disease and Blindness in Rural Anambra ...

    African Journals Online (AJOL)

Abstract. Objectives: To determine (a) the beliefs and knowledge of eye diseases/blindness; (b) the actions taken to alleviate eye diseases/blindness; (c) the disposition towards optical aids and surgery among adults in rural Anambra State, Nigeria. Materials and Methods: Three villages in the onchocercal endemic area ...

  5. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Science.gov (United States)

    Vashist, Praveen; Senjam, Suraj Singh; Gupta, Vivek; Gupta, Noopur; Kumar, Atul

    2017-02-01

To review the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific published literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations as well as their cross-references, was done until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess the impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness under the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for achieving elimination of blindness also become much more difficult to achieve under the NPCB definition. Ignoring differences in definitions when comparing the global and Indian prevalence of blindness will cause erroneous interpretations. We recommend that appropriate modifications be made in the NPCB definition of blindness to make it consistent with the WHO definition.

  6. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Directory of Open Access Journals (Sweden)

    Praveen Vashist

    2017-01-01

Full Text Available To review the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific published literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations as well as their cross-references, was done until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess the impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness under the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for achieving elimination of blindness also become much more difficult to achieve under the NPCB definition. Ignoring differences in definitions when comparing the global and Indian prevalence of blindness will cause erroneous interpretations. We recommend that appropriate modifications be made in the NPCB definition of blindness to make it consistent with the WHO definition.

  7. Survey of childhood blindness and visual impairment in Botswana

    Science.gov (United States)

    Nallasamy, Sudha; Anninger, William V; Quinn, Graham E; Kroener, Brian; Zetola, Nicola M; Nkomazana, Oathokwa

    2014-01-01

Background/aims: In terms of blind-person years, the worldwide burden of childhood blindness is second only to cataracts. In many developing countries, 30–72% of childhood blindness is avoidable. The authors conducted this study to determine the causes of childhood blindness and visual impairment (VI) in Botswana, a middle-income country with limited access to ophthalmic care. Methods: This study was conducted over 4 weeks in eight cities and villages in Botswana. Children were recruited through a radio advertisement and local outreach programmes. Those ≤15 years of age with reduced visual acuity were examined, and the WHO Prevention of Blindness Eye Examination Record for Children with Blindness and Low Vision was used to record data. Results: The authors enrolled 241 children, 79 with unilateral and 162 with bilateral VI. Of unilateral cases, 89% were avoidable: 23% preventable (83% trauma-related) and 66% treatable (40% refractive error and 31% amblyopia). Of bilateral cases, 63% were avoidable: 5% preventable and 58% treatable (33% refractive error and 31% congenital cataracts). Conclusion: Refractive error, which is easily correctable with glasses, is the most common cause of bilateral VI, with cataracts a close second. A nationwide intervention is currently being planned to reduce the burden of avoidable childhood VI in Botswana. PMID:21242581

  8. Evaluation of obstructive uropathy by deconvolution analysis of 99mTc-mercaptoacetyltriglycine (99mTc-MAG3) renal scintigraphic data. A comparison with diuresis renography

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Yoshiyuki [Mie Univ., Tsu (Japan). School of Medicine

    1997-06-01

The clinical significance of ERPF (effective renal plasma flow) and MTT (mean transit time) calculated by deconvolution analysis was studied in patients with obstructive uropathy. Subjects were 84 kidneys of 38 patients and 4 people without renal abnormality (22 males and 20 females), with a mean age of 53.8 years. Scintigraphy was done with a Toshiba γ-camera GCA-7200A equipped with a low-energy, high-resolution collimator with an energy width of 149 keV±20%, starting 20 min after loading with 500 ml of water and immediately after intravenous administration of 99mTc-MAG3 (200 MBq). Blood was collected 5 min later, and furosemide was given intravenously at 10 min. Plasma radioactivity was measured in a well-type scintillation counter and was used for correction of the blood concentration-time curve obtained from the heart-area data. Split MTT, regional MTT and ERPF were calculated by deconvolution analysis. Impaired transit was judged from the renogram after furosemide loading and was classified into 6 types. ERPF was found to be lowered in cases of obstruction and in cases of low renal function, whereas regional MTT was prolonged only in the former. The examination with deconvolution analysis was concluded to be widely applicable, since it gave useful information for treatment. (K.H.)
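The deconvolution step behind MTT estimation can be sketched with the classic matrix (forward-substitution) algorithm on a noiseless toy renogram. All curves and parameters below are synthetic and illustrative; real renograms are noisy and require regularization or constrained deconvolution.

```python
import numpy as np

N = 120
t = np.arange(N)                               # time in seconds, 1 s sampling
blood = np.exp(-t / 20.0)                      # blood input curve B(t)

# True retention (impulse response) function H(t): tracer stays fully for
# 10 s, then washes out linearly until 30 s -> mean transit time 20.5 s.
H_true = np.clip((30.0 - t) / 20.0, 0.0, 1.0)
H_true[t < 10] = 1.0
renogram = np.convolve(blood, H_true)[:N]      # renal curve R = B (*) H

# Matrix (forward-substitution) deconvolution of R = B (*) H:
# R[n] = sum_k B[n-k] H[k]  =>  solve for H sample by sample.
H_est = np.zeros(N)
for n in range(N):
    acc = renogram[n] - np.dot(blood[1:n + 1][::-1], H_est[:n])
    H_est[n] = acc / blood[0]

# MTT of the retention function: area under H divided by its plateau value.
mtt_true = H_true.sum() / H_true[0]
mtt_est = H_est.sum() / H_est[0]
```

In the noiseless toy case the forward substitution recovers H(t), and thus the MTT, essentially exactly; clinical implementations add smoothing because this triangular solve amplifies measurement noise.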

  9. BlindSense: An Accessibility-inclusive Universal User Interface for Blind People

    Directory of Open Access Journals (Sweden)

    A. Khan

    2018-04-01

Full Text Available A large number of blind people use smartphone-based assistive technology to perform their common activities. In order to provide a better user experience, the existing user interface paradigm needs to be revisited. A new user interface model is proposed in this paper: a simplified, semantically consistent, and blind-friendly adaptive user interface. The proposed solution was evaluated through an empirical study with 63 blind people, demonstrating an improved user experience in performing common activities on a smartphone.

  10. Causes of blindness and career choice among pupils in a blind ...

    African Journals Online (AJOL)

available eye care services. Furthermore, there is a need for career talk in schools for the blind to ... career where their potential can be fully maximized. .... tropicamide 1% eye drops. .... Foster A, Gilbert C. Epidemiology of childhood blindness.

  11. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

We developed an approach to visualize ... macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes, but not in foam cells, was fluorescent sterol confined to the droplet-limiting membrane.

  12. A method to investigate drivers' acceptance of Blind Spot Detection System®.

    Science.gov (United States)

    Piccinini, Giuliofrancesco; Simões, Anabela; Rodrigues, Carlos Manuel; Leitão, Miguel

    2012-01-01

Lately, with the goal of improving road safety, car makers have developed and commercialised Advanced Driver Assistance Systems (ADAS) which, through the detection of blind spot areas on the vehicle's sides, can help drivers during overtaking and lane-change tasks. Despite the possible benefits in reducing lateral crashes, the overall impact of such systems on road safety has not yet been studied in depth; notably, although some research has been carried out, there is a lack of studies regarding the long-term usage and drivers' acceptance of those systems. In order to fill this research gap, a methodology based on the combination of focus group interviews, questionnaires and a small-scale field operational test (FOT) has been designed in this study; the methodology aims at evaluating drivers' acceptance of Blind Spot Information System® and at proposing some ideas to improve the usability and user-friendliness of this (or similar) device in its future development.

  13. Prevalence of blindness and diabetic retinopathy in northern Jordan.

    Science.gov (United States)

    Rabiu, Mansur M; Al Bdour, Muawyah D; Abu Ameerh, Mohammed A; Jadoon, Muhammed Z

    2015-01-01

    To estimate the prevalence of blindness, visual impairment, diabetes, and diabetic retinopathy in north Jordan (Irbid) using the rapid assessment of avoidable blindness and diabetic retinopathy methodology. A multistage cluster random sampling technique was used to select participants for this survey. A total of 108 clusters were selected using probability proportional to size method while subjects within the clusters were selected using compact segment method. Survey teams moved from house to house in selected segments examining residents 50 years and older until 35 participants were recruited. All eligible people underwent a standardized examination protocol, which included ophthalmic examination and random blood sugar test using digital glucometers (Accu-Chek) in their homes. Diabetic retinopathy among diabetic patients was assessed through dilated fundus examination. A total of 3638 out of the 3780 eligible participants were examined. Age- and sex-adjusted prevalence of blindness, severe visual impairment, and visual impairment with available correction were 1.33% (95% confidence interval [CI] 0.87-1.73), 1.82% (95% CI 1.35-2.25), and 9.49% (95% CI 8.26-10.74), respectively, all higher in women. Untreated cataract and diabetic retinopathy were the major causes of blindness, accounting for 46.7% and 33.2% of total blindness cases, respectively. Glaucoma was the third major cause, accounting for 8.9% of cases. The prevalence of diabetes mellitus was 28.6% (95% CI 26.9-30.3) among the study population and higher in women. The prevalence of any retinopathy among diabetic patients was 48.4%. Cataract and diabetic retinopathy are the 2 major causes of blindness and visual impairment in northern Jordan. For both conditions, women are primarily affected, suggesting possible limitations to access to services. 
A diabetic retinopathy screening program needs to proactively create sex-sensitive awareness and provide easily accessible screening services with prompt treatment.

  14. Causes of low vision and blindness in rural Indonesia

    Science.gov (United States)

    Saw, S-M; Husain, R; Gazzard, G M; Koh, D; Widjaja, D; Tan, D T H

    2003-01-01

    Aim: To determine the prevalence rates and major contributing causes of low vision and blindness in adults in a rural setting in Indonesia Methods: A population based prevalence survey of adults 21 years or older (n=989) was conducted in five rural villages and one provincial town in Sumatra, Indonesia. One stage household cluster sampling procedure was employed where 100 households were randomly selected from each village or town. Bilateral low vision was defined as habitual VA (measured using tumbling “E” logMAR charts) in the better eye worse than 6/18 and 3/60 or better, based on the WHO criteria. Bilateral blindness was defined as habitual VA worse than 3/60 in the better eye. The anterior segment and lens of subjects with low vision or blindness (both unilateral and bilateral) (n=66) were examined using a portable slit lamp and fundus examination was performed using indirect ophthalmoscopy. Results: The overall age adjusted (adjusted to the 1990 Indonesia census population) prevalence rate of bilateral low vision was 5.8% (95% confidence interval (CI) 4.2 to 7.4) and bilateral blindness was 2.2% (95% CI 1.1 to 3.2). The rates of low vision and blindness increased with age. The major contributing causes for bilateral low vision were cataract (61.3%), uncorrected refractive error (12.9%), and amblyopia (12.9%), and the major cause of bilateral blindness was cataract (62.5%). The major causes of unilateral low vision were cataract (48.0%) and uncorrected refractive error (12.0%), and major causes of unilateral blindness were amblyopia (50.0%) and trauma (50.0%). Conclusions: The rates of habitual low vision and blindness in provincial Sumatra, Indonesia, are similar to other developing rural countries in Asia. Blindness is largely preventable, as the major contributing causes (cataract and uncorrected refractive error) are amenable to treatment. PMID:12928268

  15. A Robust Blind Quantum Copyright Protection Method for Colored Images Based on Owner's Signature

    Science.gov (United States)

    Heidari, Shahrokh; Gheibi, Reza; Houshmand, Monireh; Nagata, Koji

    2017-08-01

Watermarking is the imperceptible embedding of watermark bits into multimedia data for use in different applications. Among all its applications, copyright protection is the most prominent: it conceals information about the owner in the carrier so as to prohibit others from asserting copyright. This application requires a high level of robustness. In this paper, a new blind quantum copyright protection method based on the owner's signature in RGB images is proposed. The method utilizes one of the RGB channels as an indicator, and the two remaining channels are used for embedding information about the owner. In our contribution the owner's signature is considered as a text. Therefore, in order to embed it in a colored image as a watermark, a new quantum representation of text based on the ASCII character set is offered. Experimental results, analyzed in the MATLAB environment, show that the presented scheme performs well against attacks and can be used to find out who the real owner is. Finally, the discussed quantum copyright protection method is compared with a related work, and our analysis confirms that the presented scheme is more secure and applicable than previous ones found in the literature.
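A classical analogue of the indicator-channel idea can be sketched in a few lines. To be clear, this is not the paper's quantum scheme: it is a plain LSB embedding used only to illustrate how an indicator channel steers where the ASCII signature bits are hidden, and every name and parameter here is hypothetical.

```python
import numpy as np

def text_to_bits(text):
    return [int(b) for ch in text.encode("ascii") for b in format(ch, "08b")]

def bits_to_text(bits):
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode("ascii")

def embed(img, signature):
    # Hide signature bits in the LSB of G or B, steered by R's LSB (indicator).
    out = img.copy()
    flat = out.reshape(-1, 3)
    for i, bit in enumerate(text_to_bits(signature)):
        channel = 1 if (flat[i, 0] & 1) == 0 else 2
        flat[i, channel] = (flat[i, channel] & 0xFE) | bit
    return out

def extract(img, nchars):
    # Blind extraction: only the marked image is needed, since the
    # indicator (red) channel is never modified by embed().
    flat = img.reshape(-1, 3)
    bits = []
    for i in range(nchars * 8):
        channel = 1 if (flat[i, 0] & 1) == 0 else 2
        bits.append(int(flat[i, channel] & 1))
    return bits_to_text(bits)

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(image, "owner: Alice")
recovered = extract(marked, len("owner: Alice"))
```

The quantum scheme replaces the LSB manipulation with operations on a quantum image representation and adds the robustness properties the paper evaluates; the classical sketch only captures the channel-selection logic.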

  16. Wavelet-transform-based time–frequency domain reflectometry for reduction of blind spot

    International Nuclear Information System (INIS)

    Lee, Sin Ho; Park, Jin Bae; Choi, Yoon Ho

    2012-01-01

In this paper, wavelet-transform-based time–frequency domain reflectometry (WTFDR) is proposed to reduce the blind spot in reflectometry. TFDR has a blind spot problem when the time delay between the reference signal and the reflected signal is short compared with the time duration of the reference signal. To solve the blind spot problem, the wavelet transform (WT) is used, because the WT is linear. Using this property, the reference signal overlapped in the measured signal can be separated, and the blind spot is reduced by taking the difference between the wavelet coefficients of the measured signal and those of the reference signal. In the proposed method, a complex wavelet is utilized as the mother wavelet because the reference signal in WTFDR has a complex form. Finally, computer simulations and real experiments are carried out to confirm the effectiveness and accuracy of the proposed method. (paper)
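The linearity argument can be demonstrated numerically. In the sketch below (a simplified single-scale stand-in for the full wavelet transform, with illustrative parameters), a weak reflection overlaps the reference pulse; subtracting the reference's wavelet coefficients from the measured signal's coefficients isolates the reflection and recovers its delay.

```python
import numpy as np

n = 512
t = np.arange(n)
f0, t0, width, d = 0.1, 150, 12.0, 15   # d: reflection delay inside the pulse

# Reference (incident) pulse, and a measured signal in which a weak
# reflection overlaps the reference -- the classic blind-spot situation.
ref_sig = np.cos(2 * np.pi * f0 * (t - t0)) * np.exp(-0.5 * ((t - t0) / width) ** 2)
measured = ref_sig + 0.6 * np.roll(ref_sig, d)

# Single-scale complex Morlet wavelet, matched to the pulse frequency.
tw = np.arange(-60, 61)
psi = np.exp(2j * np.pi * f0 * tw) * np.exp(-0.5 * (tw / width) ** 2)

def coeffs(x):
    # Wavelet coefficients at one scale = correlation with the wavelet.
    return np.convolve(x, np.conj(psi[::-1]), mode="same")

# Linearity of the WT: W(measured) - W(reference) = W(reflection),
# so the overlapped reflection is separated despite the blind spot.
residual = coeffs(measured) - coeffs(ref_sig)
delay_est = int(np.argmax(np.abs(residual))) - int(np.argmax(np.abs(coeffs(ref_sig))))
```

A plain correlation of the measured signal against the reference would merge the two overlapping peaks; the coefficient subtraction leaves only the reflection's contribution, whose peak gives the delay directly.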

  17. Concepts and Criteria for Blind Quantum Source Separation and Blind Quantum Process Tomography

    Directory of Open Access Journals (Sweden)

    Alain Deville

    2017-07-01

    Full Text Available Blind Source Separation (BSS is an active domain of Classical Information Processing, with well-identified methods and applications. The development of Quantum Information Processing has made possible the appearance of Blind Quantum Source Separation (BQSS, with a recent extension towards Blind Quantum Process Tomography (BQPT. This article investigates the use of several fundamental quantum concepts in the BQSS context and establishes properties already used without justification in that context. It mainly considers a pair of electron spins initially separately prepared in a pure state and then submitted to an undesired exchange coupling between these spins. Some consequences of the existence of the entanglement phenomenon, and of the probabilistic aspect of quantum measurements, upon BQSS solutions, are discussed. An unentanglement criterion is established for the state of an arbitrary qubit pair, expressed first with probability amplitudes and secondly with probabilities. The interest of using the concept of a random quantum state in the BQSS context is presented. It is stressed that the concept of statistical independence of the sources, widely used in classical BSS, should be used with care in BQSS, and possibly replaced by some disentanglement principle. It is shown that the coefficients of the development of any qubit pair pure state over the states of an orthonormal basis can be expressed with the probabilities of results in the measurements of well-chosen spin components.

  18. Association Between Retinal Vascular Calibre and Blindness in Young Patients With Type 1 Diabetes

    DEFF Research Database (Denmark)

    Rasmussen, Malin Lundberg; Lundberg, Lars Kristian; Frydkjær-Olsen, Ulrik

Purpose: To examine the association between retinal vascular calibre and incident blindness caused by diabetic retinopathy in young patients with type 1 diabetes. Methods: A case-control study of 6... years. Incident blindness was defined for patients who registered between 1995 and 2010 in the Danish Association of the Blind, which is a voluntary organization open for patients with a visual acuity at or below 6/60 (0.1) in the best eye. Each blind patient was matched with 3 controls regarding age... Retinopathy ranged between no retinopathy (20 eyes, 55.6%), mild NPDR (15 eyes, 41.6%) and moderate NPDR (1 eye, 2.8%). From baseline retinal photos, the central retinal artery and vein equivalents (CRAE and CRVE) were calculated in the validated semi-automated computer program IVAN using the Big6 method. Two eyes...

  19. Models for the blind

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2014-01-01

When displayed in museum cabinets, tactile objects that were once used in the education of blind and visually impaired people appear to us, sighted visitors, as anything but tactile. We cannot touch them due to museum policies, and we can hardly imagine what it would have been like for a blind person to touch them in their historical context. And yet these objects are all about touch, from the concrete act of touching something to the norms that assigned touch a specific pedagogical role in nineteenth-century blind schools. The aim of this article is twofold. First, I provide a historical background to the tactile objects of the blind. When did they appear as a specific category of pedagogical aid and how did they help determine the relation between blindness, vision, and touch? Second, I address the tactile objects from the point of view of empirical sources and historical evidence. Material...

  20. Access to healthcare for disabled persons. How are blind people reached by HIV services?

    Science.gov (United States)

    Saulo, Bryson; Walakira, Eddy; Darj, Elisabeth

    2012-03-01

    Disabled people are overlooked and marginalised globally. There is a lack of information on blind people and HIV-related services and it is unclear how HIV-services target blind people in a sub-Saharan urban setting. To explore how blind people are reached by HIV-services in Kampala, Uganda. A purposeful sample of blind people and seeing healthcare workers were interviewed, and data on their opinions and experiences were collected. The data were analysed by qualitative content analysis, with a focus on manifest content. Three categories emerged from the study, reaching for HIV information and knowledge, lack of services, and experiences of discrimination. General knowledge on HIV prevention/transmission methods was good; however, there was scepticism about condom use. Blind people mainly relied on others for accessing HIV information, and a lack of special services for blind people to be able to test for HIV was expressed. The health service for blind people was considered inadequate, unequal and discriminatory, and harassment by healthcare staff was expressed, but not sexual abuse. Concerns about disclosure of personal medical information were revealed. Access to HIV services and other healthcare related services for blind people is limited and the objectives of the National Strategic Plan for HIV/AIDS 2007-2012 have not been achieved. There is a need for alternative methods for sensitisation and voluntary counselling and testing (VCT) for blind people. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

To develop a sensitivity-based parallel imaging reconstruction method that iteratively reconstructs both the coil sensitivities and the MR image simultaneously, based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly, and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel (Sparse BLIP) imaging, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces a sparseness constraint on the image, as done in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLIP reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLIP algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.
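The alternating structure described here, updating a sparsity-regularized image and then the sensitivities against undersampled multi-coil data, can be sketched on a 1D toy problem. This is a generic alternating gradient/soft-threshold sketch, not the authors' algorithm; all sizes, step sizes and priors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, ncoil, niter, lam = 128, 2, 500, 1e-3

# Ground truth: a sparse image and smooth coil sensitivities.
x_true = np.zeros(n, dtype=complex)
x_true[[12, 40, 70, 90, 110]] = [1.0, 1.5, 0.8, 1.2, 1.0]
t = np.arange(n)
s_true = np.stack([1 + 0.5 * np.cos(2 * np.pi * t / n),
                   1 + 0.5 * np.sin(2 * np.pi * t / n)]).astype(complex)

mask = rng.random(n) < 0.55                      # random k-space undersampling
F = lambda v: np.fft.fft(v, norm="ortho")        # unitary FFT -> adjoint is ifft
Fh = lambda v: np.fft.ifft(v, norm="ortho")
y = np.stack([mask * F(s_true[c] * x_true) for c in range(ncoil)])

def residual(x, s):
    return sum(np.linalg.norm(mask * F(s[c] * x) - y[c]) ** 2 for c in range(ncoil))

def soft(z, thr):
    mag = np.abs(z)
    return np.where(mag > thr, z * (1 - thr / np.maximum(mag, 1e-12)), 0)

# Blind alternating reconstruction: image AND sensitivities are unknown.
x = sum(Fh(y[c]) for c in range(ncoil)) / ncoil  # crude initial image
s = np.ones((ncoil, n), dtype=complex)           # flat initial sensitivities
res0 = residual(x, s)
for _ in range(niter):
    r = np.stack([mask * F(s[c] * x) - y[c] for c in range(ncoil)])
    gx = sum(np.conj(s[c]) * Fh(mask * r[c]) for c in range(ncoil))
    x = soft(x - 0.1 * gx, lam)                  # sparsity-promoting image step
    r = np.stack([mask * F(s[c] * x) - y[c] for c in range(ncoil)])
    for c in range(ncoil):
        s[c] -= 0.05 * np.conj(x) * Fh(mask * r[c])  # sensitivity step
res_final = residual(x, s)
```

The sketch omits the sensitivity-smoothness regularizer and the careful step-size/continuation schedule a real Sparse BLIP-style solver would use; it only shows the alternating data-fit structure, and the unavoidable scaling ambiguity between image and sensitivities is left unresolved.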

  2. From perception to metacognition: Auditory and olfactory functions in early blind, late blind, and sighted individuals

    Directory of Open Access Journals (Sweden)

    Stina Cornell Kärnekull

    2016-09-01

Full Text Available Although the evidence is mixed, studies have shown that blind individuals perform better than sighted ones at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in the metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity.

  3. Lived Experience of Self-Care in Blind Individuals: A Phenomenological Study

    Directory of Open Access Journals (Sweden)

    Mahmood Shamshiri

    2016-06-01

    Full Text Available Background and Objectives: Blindness is a serious condition that can affect the mental balance and general organized personality of a blind person. This study was carried out with the purpose of explaining the lived experience of self-care in blind individuals. Methods: Interpretive phenomenology was used to conduct the study. Through purposeful sampling, eight in-depth, semi-structured interviews lasting between 50 and 120 minutes were performed with eight participants. The method introduced by van Manen was used to conduct the research and extract the concepts. Results: Analysis of the data led to three concepts, “being through discipline”, “independence through discipline”, and “protection of discipline”, which together gave rise to the main concept, “self-care through a disciplined life”. Conclusion: Blind individuals benefit from a disciplined lifestyle to lead an independent life, and also expect this kind of discipline from others. Therefore, it is necessary that family members and health care providers consider the concept of a disciplined life.

  4. Optimized blind gamma-ray pulsar searches at fixed computing budget

    International Nuclear Information System (INIS)

    Pletsch, Holger J.; Clark, Colin J.

    2014-01-01

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, such as those from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that, overall, these optimizations can lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
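
    The multistage strategy (a cheap semicoherent scan over a coarse grid, followed by fully coherent refinement of the surviving candidates) can be sketched on a toy frequency search. The simple Fourier-power statistic and the grid spacings below are illustrative assumptions, not the detection statistics of the paper.

```python
import cmath

def coherent_power(times, f):
    """Coherent Fourier power of photon arrival times at trial frequency f."""
    z = sum(cmath.exp(-2j * cmath.pi * f * t) for t in times)
    return abs(z) ** 2 / len(times)

def semicoherent_power(times, f, seg=8):
    """Sum of powers over short segments: cheaper but less sensitive."""
    chunks = [times[i:i + seg] for i in range(0, len(times), seg)]
    return sum(coherent_power(c, f) for c in chunks)

# toy data: strictly periodic "photons" from a 3.0 Hz pulsar
true_f = 3.0
times = [n / true_f for n in range(64)]

# stage 1: coarse semicoherent scan of 1.0 .. 5.9 Hz
coarse = [k / 10.0 for k in range(10, 60)]
candidates = sorted(coarse, key=lambda f: -semicoherent_power(times, f))[:3]

# stage 2: fully coherent refinement on a fine grid around each candidate
fine = [c + d / 1000.0 for c in candidates for d in range(-50, 51)]
best = max(fine, key=lambda f: coherent_power(times, f))
```

    The coarse grid can be wide because the segmented statistic degrades slowly with frequency offset; the expensive coherent statistic is then evaluated only on the small refined grids.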

  5. Study of the inner part of the β pictoris dust disk: deconvolution of 10 microns images and modelization of the dust emission

    International Nuclear Information System (INIS)

    Pantin, Eric

    1996-01-01

    In 1984, observations by the infrared satellite IRAS showed that numerous main-sequence stars are surrounded by relatively tenuous dust disks. The most studied example is the disk of the star Beta Pictoris. Coronagraphic observations are limited to the outermost regions of the disk; in the infrared, this is not the case. We used an infrared camera to obtain 10 micron images of the central regions. In order to deduce the dust density, several requirements must be met. First, we had to deconvolve these images, which are degraded by a combination of diffraction and seeing. We initially used standard methods (Richardson-Lucy, Maximum Entropy, etc.), and then developed a new method for deconvolution and filtering of astronomical images based on regularization by Multi-Scale Maximum Entropy. We then built a model of the thermal emission of the dust to calculate the temperature of the grains. The resulting density shows a region between the star and 50-60 Astronomical Units that is depleted of dust. The density is compatible with models simulating the gravitational interactions between such a disk and a planet with half the mass of Saturn. We refined the models of the particles' emission (mixtures of several materials; particles porous or not, coated with ice or not) to build a global model of the disk taking into account all the observables: IRAS infrared fluxes, 10 and 20 micron fluxes, the 10 micron spectrum, and scattered fluxes in the visible. In our best model, the particles are porous silicate grains (a mixture of olivine and pyroxene) coated with a refractory organic mantle, which becomes 'frozen' (coated with ice) beyond a distance of 90 Astronomical Units from the star. This model allows us to predict an infrared spectrum showing the characteristic emission of ice around 45-50 microns, which will be compared with the observations of the infrared satellite ISO. (author) [fr
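
    Of the standard methods mentioned, Richardson-Lucy is the easiest to write down: a multiplicative fixed-point iteration that preserves non-negativity. The 1-D sketch below, with a made-up three-tap PSF and two point sources, only shows the shape of the iteration, not the Multi-Scale Maximum Entropy method developed in the thesis.

```python
def convolve_same(signal, kernel):
    """1-D same-size convolution (kernel assumed centered, odd length)."""
    k = len(kernel) // 2
    n = len(signal)
    return [sum(signal[i + j - k] * kernel[j]
                for j in range(len(kernel)) if 0 <= i + j - k < n)
            for i in range(n)]

def richardson_lucy(observed, psf, n_iter=50):
    """RL update: est <- est * (psf_flipped * (observed / (psf * est)))."""
    est = [1.0] * len(observed)
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = convolve_same(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve_same(ratio, psf_flipped)
        est = [e * c for e, c in zip(est, correction)]
    return est

psf = [0.25, 0.5, 0.25]                  # assumed blur kernel
truth = [0, 0, 4.0, 0, 0, 0, 2.0, 0, 0]  # two point sources
observed = convolve_same(truth, psf)
restored = richardson_lucy(observed, psf)
```

    Starting from a flat estimate, the iteration progressively concentrates flux back onto the two source positions; regularized methods such as the one developed in the thesis are needed once noise is present.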

  6. Comparisons of coded aperture imaging using various apertures and decoding methods

    International Nuclear Information System (INIS)

    Chang, L.T.; Macdonald, B.; Perez-Mendez, V.

    1976-07-01

    The utility of coded aperture γ camera imaging of radioisotope distributions in nuclear medicine lies in its ability to give depth information about a three-dimensional source. We have calculated imaging with Fresnel zone plate and multiple pinhole apertures to produce coded shadows, and reconstruction of these shadows using correlation, Fresnel diffraction, and Fourier transform deconvolution. Comparisons of the coded apertures and decoding methods are made by evaluating their point response functions for both in-focus and out-of-focus image planes. Background averages and standard deviations were calculated. In some cases, background subtraction was made using combinations of two complementary apertures. Results using deconvolution reconstruction for finite numbers of events are also given.
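
    The correlation decoding idea can be demonstrated in one dimension: a point source casts a shifted copy of the mask, and correlating the shadow with a balanced decoding array recovers a sharp point response. The length-7 cyclic mask below is a toy uniformly-redundant-array-style pattern chosen for illustration, not one of the apertures studied in the paper.

```python
def circ_convolve(src, mask):
    """Cyclic shadow formation: each source point shifts the mask pattern."""
    n = len(src)
    return [sum(src[s] * mask[(i - s) % n] for s in range(n)) for i in range(n)]

def circ_correlate(a, b):
    """Cyclic cross-correlation used for decoding."""
    n = len(a)
    return [sum(a[(i + j) % n] * b[j] for j in range(n)) for i in range(n)]

mask = [1, 1, 1, 0, 0, 1, 0]          # open (1) and opaque (0) elements
decoder = [2 * m - 1 for m in mask]   # balanced decoder: sidelobes cancel

source = [0, 0, 5.0, 0, 0, 0, 0]      # point source at position 2
shadow = circ_convolve(source, mask)  # coded shadow on the detector
image = circ_correlate(shadow, decoder)
```

    For this mask every nonzero cyclic shift overlaps the open elements exactly twice, so the balanced decoder cancels all sidelobes and the point response is a clean delta, analogous to the point response functions evaluated in the paper.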

  7. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    Science.gov (United States)

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current by exploiting the glucose-oxidase principle. This current is then transformed into glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed on both simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as hypo/hyperglycemic alert generators or the artificial pancreas.
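
    The core of the blood-to-interstitium compensation can be illustrated with a first-order kinetic model, ig'(t) = (bg(t) - ig(t)) / tau, inverted by discretized differentiation. The sketch below is noise-free and unregularized; the actual method adds regularization and online updating from SMBG pairs, and the time constant used here is an assumed value.

```python
def simulate_ig(bg, tau, dt):
    """Forward model: interstitial glucose is a lagged version of blood glucose."""
    ig = [bg[0]]
    for k in range(1, len(bg)):
        ig.append(ig[-1] + dt * (bg[k] - ig[-1]) / tau)
    return ig

def invert_ig(ig, tau, dt):
    """Deconvolution step: bg[k] = ig[k-1] + tau * (ig[k] - ig[k-1]) / dt."""
    return [ig[0]] + [ig[k - 1] + tau * (ig[k] - ig[k - 1]) / dt
                      for k in range(1, len(ig))]

dt, tau = 1.0, 10.0                  # 1-min samples, 10-min lag (assumed)
bg = [100.0] * 10 + [140.0] * 30     # a step rise in blood glucose
ig = simulate_ig(bg, tau, dt)        # what the sensor would measure
bg_hat = invert_ig(ig, tau, dt)      # compensated signal
```

    On noise-free data the inversion is exact; with real sensor noise the differentiation step amplifies error, which is precisely why the paper uses a regularized deconvolution rather than this naive inverse.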

  8. Postictal blindness in adults.

    OpenAIRE

    Sadeh, M; Goldhammer, Y; Kuritsky, A

    1983-01-01

    Cortical blindness following grand mal seizures occurred in five adult patients. The causes of the seizures included idiopathic epilepsy, vascular accident, brain cyst, acute encephalitis and chronic encephalitis. Blindness was permanent in one patient, but the others recovered within several days. Since most of the patients were either unaware of or denied their blindness, it is possible that this event often goes unrecognised. Cerebral hypoxia is considered the most likely mechanism.

  9. INTRODUCTION Childhood blindness is increasingly becoming a ...

    African Journals Online (AJOL)

    number of blind years resulting from blindness in children is also equal to the number of blind years due to age-related cataract.10 The burden of disability in terms of blind years in these children represents a major. CAUSES OF BLINDNESS AND VISUAL IMPAIRMENT AT THE SCHOOL FOR THE BLIND, OWO, NIGERIA.

  10. Blind Separation of Nonstationary Sources Based on Spatial Time-Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Zhang Yimin

    2006-01-01

    Full Text Available Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics when dealing with signals that are localized in the time-frequency (t-f) domain. In this paper, we propose the use of STFD matrices for both whitening and recovery of the mixing matrix, two stages commonly required in many BSS methods, to make BSS performance robust to noise. In addition, a simple method is proposed to select the auto- and cross-term regions of the time-frequency distribution (TFD). To further improve BSS performance, t-f grouping techniques are introduced to reduce the number of signals under consideration and to allow the receiver array to separate more sources than the number of array sensors, provided that the sources have disjoint t-f signatures. With the use of one or more of the techniques proposed in this paper, improved blind separation of nonstationary signals can be achieved.
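
    The first of the two stages mentioned, whitening, can be shown in closed form for two mixtures: diagonalize the 2x2 sample covariance and rescale each principal axis to unit variance. This sketch uses a plain second-order covariance rather than the STFD matrices proposed in the paper; the sources and mixing coefficients are arbitrary assumptions.

```python
import math

def whiten_pair(x1, x2):
    """Whiten two mixture signals via eigen-decomposition of their covariance."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    a = sum((u - m1) ** 2 for u in x1) / n                     # var(x1)
    d = sum((v - m2) ** 2 for v in x2) / n                     # var(x2)
    b = sum((u - m1) * (v - m2) for u, v in zip(x1, x2)) / n   # cov(x1, x2)
    tr, det = a + d, a * d - b * b
    root = math.sqrt(tr * tr / 4.0 - det)
    l1, l2 = tr / 2.0 + root, tr / 2.0 - root   # eigenvalues of [[a,b],[b,d]]
    theta = 0.5 * math.atan2(2.0 * b, a - d)    # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    z1 = [((u - m1) * c + (v - m2) * s) / math.sqrt(l1)
          for u, v in zip(x1, x2)]
    z2 = [(-(u - m1) * s + (v - m2) * c) / math.sqrt(l2)
          for u, v in zip(x1, x2)]
    return z1, z2

# two sources mixed by an unknown 2x2 matrix
s1 = [math.sin(0.10 * k) for k in range(400)]
s2 = [math.cos(0.23 * k) for k in range(400)]
x1 = [1.0 * u + 0.5 * v for u, v in zip(s1, s2)]
x2 = [0.3 * u + 1.0 * v for u, v in zip(s1, s2)]
z1, z2 = whiten_pair(x1, x2)
```

    After this stage the whitened signals are uncorrelated with unit variance, and only a rotation remains to be identified, which is where the joint use of several STFD matrices comes in.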

  11. Driven-Walking for Visually Impaired/Blind People through WiMAX

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2010-03-01

    Full Text Available It is known that people who are blind/visually impaired find it difficult to move around, especially in unknown places. Usually the only help they have is their walking stick (white cane), a guide dog, and sometimes special warning sounds or road signals at specific positions. Material and Method: In this paper we try to find a solution for building an appropriate navigation system for blind people. Results: Based on the powerful properties of the mobile WiMAX standard, we suggest a navigation application which can properly translate a digital visual environment for blind/visually impaired users through a plethora of combinations such as voice, brain or tongue signals. Conclusions: We believe that such an idea will be an initial point for a plethora of applications which will eliminate the walking disabilities of blind/visually impaired people.

  12. Survey of blindness and low vision in Egbedore, South-Western Nigeria.

    Science.gov (United States)

    Kolawole, O U; Ashaye, A O; Adeoti, C O; Mahmoud, A O

    2010-01-01

    Developing efficient and cost-effective eye care programmes for communities in Nigeria has been hampered by inadequate and inaccurate data on blindness and low vision. To determine the prevalence and causes of blindness and low vision among adults 50 years and older in South-Western Nigeria in order to develop a viable eye care programme for the community. Twenty clusters of 60 subjects aged 50 years and older were selected by systematic random cluster sampling. Information was collected and ocular examinations were conducted on each consenting subject. Data were recorded in a specially designed questionnaire and analysed using descriptive statistical methods. Of the 1200 subjects enrolled for the study, 1183 (98.6%) were interviewed and examined. Seventy-five (6.3%) of the 1183 subjects were bilaterally blind and 223 (18.9%) had bilateral low vision according to the WHO definition of blindness and low vision. Blindness was about 1.6 times commoner in men than women. Cataract, glaucoma and posterior segment disorders were the major causes of bilateral blindness. Bilateral low vision was mainly due to cataract, refractive errors and posterior segment disorders. The prevalence of blindness and low vision in this study population was high. The main causes are avoidable. Elimination of avoidable blindness and low vision calls for attention and commitment from government and eye care workers in South-Western Nigeria.

  13. Adapting diagrams from physics textbooks: a way to improve the autonomy of blind students

    Science.gov (United States)

    Dickman, A. G.; Martins, A. O.; Ferreira, A. C.; Andrade, L. M.

    2014-09-01

    We devise and test a set of tactile symbols to represent elements frequently used in mechanics diagrams, such as vectors, ropes, pulleys, blocks and surfaces, that can be used to adapt drawings of physics situations in textbooks for blind students. We also investigate how figures are described for blind students in classroom activities and exams, by interviewing three blind students using the oral history method. The symbols were tested at a specialized school for the blind. Our results indicate that, with training, blind students become familiar with the symbols and can identify them in a problem without the need for a spoken description. This educational product can help blind students to achieve the same conditions of autonomy as sighted ones when studying physics.

  14. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
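
    A deconvolution of the kind described (dividing out a measured PSF) can be sketched in the frequency domain, with a small regularizer keeping the division stable. The circular 1-D geometry, the 8-point PSF, and the flaw profile below are illustrative assumptions, not the paper's measured ECT responses.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def wiener_deconvolve(measured, psf, eps=1e-3):
    """Regularized inverse filter: X = M * conj(H) / (|H|^2 + eps)."""
    M, H = dft(measured), dft(psf)
    X = [m * h.conjugate() / (abs(h) ** 2 + eps) for m, h in zip(M, H)]
    return idft(X)

psf = [0.5, 0.2, 0.05, 0, 0, 0, 0.05, 0.2]   # assumed probe response
flaws = [0, 0, 3.0, 0, 2.0, 0, 0, 0]         # two nearby flaws
n = len(psf)
measured = [sum(flaws[s] * psf[(i - s) % n] for s in range(n))
            for i in range(n)]
restored = wiener_deconvolve(measured, psf)
```

    The regularizer eps trades resolution for noise robustness: with eps = 0 the inverse filter is exact on clean data but blows up wherever the PSF spectrum is small.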

  15. Childhood Fears among Children Who Are Blind: The Perspective of Teachers Who Are Blind

    Science.gov (United States)

    Al-Zboon, Eman

    2017-01-01

    The aim of this study was to investigate childhood fears in children who are blind from the perspective of teachers who are blind. The study was conducted in Jordan. Forty-six teachers were interviewed. Results revealed that the main fear content in children who are blind includes fear of the unknown; environment-, transportation- and…

  16. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    OpenAIRE

    Apostolos Meliones; Demetrios Sampson

    2018-01-01

    Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils), which has primarily addressed blind or visually impaired (BVI) accessibility and self-guided tours in museums. A pilot prototype has been developed and is curr...

  17. Comment on the paper "Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution by G. Kitis, J.M. Gomez-Ros, Nuclear Instruments and Methods in Physics Research A 440, 2000, pp 224-231"

    Science.gov (United States)

    Kazakis, Nikolaos A.

    2018-01-01

    The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for fitting TL glow curves of materials with a continuous trap distribution using this equation.

  18. Causes of blindness and career choice among pupils in a blind school; South Western Nigeria.

    Science.gov (United States)

    Fadamiro, Christianah Olufunmilayo

    2014-01-01

    The causes of blindness vary from place to place, with about 80% of them being avoidable. Furthermore, blind people face many challenges in career choice, limiting their economic potential and full integration into society. This study aims at identifying the causes of blindness and career choices among pupils in a school for the blind in South-Western Nigeria. This is a descriptive study of the causes of blindness and career choice among 38 pupils residing in a school for the blind at Ikere-Ekiti, South-Western Nigeria. Thirty-eight pupils, comprising 25 males (65.8%) and 13 females (34.2%) with ages ranging from 6 to 39 years, were seen for the study. The commonest cause of blindness was cataract, with 14 cases (36.84%), while congenital glaucoma and infection had an equal proportion of 5 cases each (13.16%). Avoidable causes constituted the greatest proportion of causes, 27 (71.05%), while unavoidable causes accounted for 11 (28.9%). Law was the profession most desired by the pupils, 11 (33.3%), followed by teaching, 9 (27.3%); other desired professions included engineering, journalism and farming. The greatest proportion of the causes of blindness identified in this study is avoidable. There is a need to create public awareness of some of the notable causes, particularly cataract, and to motivate the community to utilize available eye care services. Furthermore, there is a need for career talks in schools for the blind to enable pupils to choose careers where their potential can be fully maximized.

  19. A survey of visual impairment and blindness in children attending seven schools for the blind in Myanmar.

    Science.gov (United States)

    Muecke, James; Hammerton, Michael; Aung, Yee Yee; Warrier, Sunil; Kong, Aimee; Morse, Anna; Holmes, Martin; Yapp, Michael; Hamilton, Carolyn; Selva, Dinesh

    2009-01-01

    To determine the causes of visual impairment and blindness amongst children in schools for the blind in Myanmar; to identify the avoidable causes of visual impairment and blindness; and to provide spectacles, low vision aids, orientation and mobility training and ophthalmic treatment where indicated. Two hundred and eight children under 16 years of age from all 7 schools for the blind in Myanmar were examined and the data entered into the World Health Organization Prevention of Blindness Examination Record for Childhood Blindness (WHO/PBL ERCB). One hundred and ninety nine children (95.7%) were blind (BL = Visual Acuity [VA] schools for the blind in Myanmar had potentially avoidable causes of SVI/BL. With measles being both the commonest identifiable and commonest avoidable cause, the data supports the need for a measles immunization campaign. There is also a need for a dedicated pediatric eye care center with regular ophthalmology visits to the schools, and improved optometric, low vision and orientation and mobility services in Myanmar.

  20. Solar control: comparison of two new systems with the state of the art on the basis of a new general evaluation method for facades with venetian blinds or other solar control systems

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, T.E. [Fraunhofer Institute for Solar Energy Systems ISE, Freiburg (Germany)

    2006-06-15

    The author has developed two new sun-shading systems together with two different companies. These systems are compared with state-of-the-art products on the basis of the new general evaluation method for facades with venetian blinds or other solar control systems that has been presented in detail in [T.E. Kuhn, Solar control: a general evaluation method for facades with venetian blinds or other solar control systems to be used 'stand-alone' or within building simulation programs, Energy Buildings, in press]. The main advantage is that the method can take into account realistic user behaviour (different utilisation modes). Without the new methodology, it would not have been possible to recognise the weaknesses of products which are currently on the market. This recognition was the basis for the design of the two new products: 1. The new stainless steel blind s-enn (Registered Trademark) selectively shields certain regions of the sky. This leads to a transparent appearance, while direct insolation of the room and the associated glare is prevented in most cases. 2. The special shape of the 'Genius slats' of a new venetian blind ensures good sun-shading properties which are relatively independent of the actual setting of the slats over broad ranges, ensuring robust performance despite so-called 'faulty operation'. In other words: the performance of the new venetian blind is relatively insensitive to different utilisation modes. (author)

  1. Blind source separation advances in theory, algorithms and applications

    CERN Document Server

    Wang, Wenwu

    2014-01-01

    Blind Source Separation reports new results from the study of Blind Source Separation (BSS). The book collects novel research ideas and training material in BSS, independent component analysis (ICA), artificial intelligence and signal processing applications. Furthermore, research results previously scattered across many journals and conferences worldwide are methodically edited and presented in a unified form. The book is likely to be of interest to university researchers, R&D engineers and graduate students in computer science and electronics who wish to learn the core principles, methods, algorithms, and applications of BSS. Dr. Ganesh R. Naik works at the University of Technology, Sydney, Australia; Dr. Wenwu Wang works at the University of Surrey, UK.

  2. Prevalence and causes of blindness at a tertiary hospital in Douala, Cameroon

    Science.gov (United States)

    Eballé, André Omgbwa; Mvogo, Côme Ebana; Koki, Godefroy; Mounè, Nyouma; Teutu, Cyrille; Ellong, Augustin; Bella, Assumpta Lucienne

    2011-01-01

    Purpose The aim of this study was to determine the prevalence and causes of bilateral and unilateral blindness in the town of Douala and its environs based on data from the ophthalmic unit of a tertiary hospital in Douala. Methods We conducted a retrospective epidemiological survey of consultations at the eye unit of the Douala General Hospital over the last 20 years (from January 1, 1990 to December 31, 2009). Results Out of the 1927 cases of blindness, 1000 were unilateral, corresponding to a hospital prevalence of 1.84% and 927 cases were bilateral, corresponding to a hospital prevalence of 1.71%. No statistically significant difference was noted between the two (P = 0.14). The leading causes of bilateral blindness were cataract (50.1%), glaucoma (19.7%), and diabetic retinopathy (7.8%) while the leading causes of unilateral blindness were cataract (40.4%), glaucoma (14.1%), and retinal detachment (9.1%). Cataract (51.2%), cortical blindness (16.3%), and congenital glaucoma (10%) were the leading causes of bilateral blindness in children aged less than 10 years. Conclusion Blindness remains a public health problem in the Douala region with a hospital prevalence which is relatively higher than the national estimate given by the National Blindness Control Program. PMID:21966211

  3. The Cost of Blindness in the Republic of Ireland 2010–2020

    Directory of Open Access Journals (Sweden)

    D. Green

    2016-01-01

    Full Text Available Aims. To estimate the prevalence of blindness in the Republic of Ireland and the associated financial and total economic cost between 2010 and 2020. Methods. Estimates for the prevalence of blindness in the Republic of Ireland were based on blindness registration data from the National Council for the Blind of Ireland. Estimates for the financial and total economic cost of blindness were based on the sum of direct and indirect healthcare and nonhealthcare costs. Results. We estimate that there were 12,995 blind individuals in Ireland in 2010 and in 2020 there will be 17,997. We estimate that the financial and total economic costs of blindness in the Republic of Ireland in 2010 were €276.6 million and €809 million, respectively, and will increase in 2020 to €367 million and €1.1 billion, respectively. Conclusions. Here, ninety-eight percent of the cost of blindness is borne by the Departments of Social Protection and Finance and not by the Department of Health as might initially be expected. Cost of illness studies should play a role in public policy making as they help to quantify the indirect or “hidden” costs of disability and so help to reveal the true cost of illness.

  4. Blindness and visual impairment in opera.

    Science.gov (United States)

    Aydin, Pinar; Ritch, Robert; O'Dwyer, John

    2018-01-01

    The performing arts mirror the human condition. This study sought to analyze the reasons for the inclusion of visually impaired characters in opera, the cause of the blindness or near blindness, and the dramatic purpose of the blindness in the storyline. We reviewed operas from the 18th century to 2010 and included all characters with ocular problems. We classified the cause of each character's ocular problem (organic, nonorganic, and other) in relation to the thematic setting of the opera: biblical and mythical, blind beggars or blind musicians, historical (real or fictional characters), and contemporary or futuristic. Cases of blindness in 55 characters (2 as a choir) from 38 operas were detected over 3 centuries of repertoire: 11 had trauma-related visual impairment, 5 had congenital blindness, 18 had visual impairment of unknown cause, 9 had psychogenic or malingering blindness, and 12 were symbolic or miracle-related. One opera featured an ophthalmologist curing a patient. The research illustrates that visual impairment was frequently used as an artistic device to enhance the intent of an opera and situate it in its time.

  5. The first rapid assessment of avoidable blindness (RAAB in Thailand.

    Directory of Open Access Journals (Sweden)

    Saichin Isipradit

    Full Text Available BACKGROUND: The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. METHODS: A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received comprehensive eye examinations by ophthalmologists. RESULTS: The age- and sex-adjusted prevalences of blindness (presenting visual acuity [VA] <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5-0.8), 1.3% (95% CI: 1.0-1.6), and 12.6% (95% CI: 10.8-14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. CONCLUSION: Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract; management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy; prevention of childhood blindness; and establishment of a robust eye health information system.

  6. New experimental and analysis methods in I-DLTS

    International Nuclear Information System (INIS)

    Pandey, S.U.; Middelkamp, P.; Li, Z.; Eremin, V.

    1998-02-01

    A new experimental apparatus to perform I-DLTS measurements is presented. The method is shown to be faster and more sensitive than traditional double boxcar I-DLTS systems. A novel analysis technique utilizing multiple exponential fits to the I-DLTS signal from a highly neutron irradiated silicon sample is presented with a discussion of the results. It is shown that the new method has better resolution and can deconvolute overlapping peaks more accurately than previous methods
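
    The idea of fitting multiple exponentials to an I-DLTS transient can be sketched with a simple separable least-squares scheme: grid-search the two time constants and solve the two amplitudes linearly at each grid point. The synthetic transient and grid below are assumptions for illustration; a production fit would use a proper nonlinear optimizer.

```python
import math

def fit_two_exponentials(t, y, tau_grid):
    """Return (residual, tau1, tau2, amp1, amp2) minimizing least squares."""
    best = None
    for i, t1 in enumerate(tau_grid):
        for t2 in tau_grid[i + 1:]:
            b1 = [math.exp(-x / t1) for x in t]
            b2 = [math.exp(-x / t2) for x in t]
            # 2x2 normal equations for the linear amplitudes
            a11 = sum(u * u for u in b1)
            a12 = sum(u * v for u, v in zip(b1, b2))
            a22 = sum(v * v for v in b2)
            r1 = sum(u * w for u, w in zip(b1, y))
            r2 = sum(v * w for v, w in zip(b2, y))
            det = a11 * a22 - a12 * a12
            if abs(det) < 1e-12:
                continue
            c1 = (a22 * r1 - a12 * r2) / det
            c2 = (a11 * r2 - a12 * r1) / det
            resid = sum((w - c1 * u - c2 * v) ** 2
                        for u, v, w in zip(b1, b2, y))
            if best is None or resid < best[0]:
                best = (resid, t1, t2, c1, c2)
    return best

# synthetic transient: two overlapping decays (amplitudes 3 and 1)
t = [0.01 * k for k in range(100)]
y = [3.0 * math.exp(-x / 0.05) + 1.0 * math.exp(-x / 0.4) for x in t]
resid, tau1, tau2, amp1, amp2 = fit_two_exponentials(
    t, y, [0.05 * k for k in range(1, 13)])
```

    On noise-free data the two decay constants and amplitudes are recovered exactly; the resolution claim in the abstract concerns how well such overlapping components can still be separated once noise is added.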

  7. Using Digital Libraries Non-Visually: Understanding the Help-Seeking Situations of Blind Users

    Science.gov (United States)

    Xie, Iris; Babu, Rakesh; Joo, Soohyung; Fuller, Paige

    2015-01-01

    Introduction: This study explores blind users' unique help-seeking situations in interacting with digital libraries. In particular, help-seeking situations were investigated at both the physical and cognitive levels. Method: Fifteen blind participants performed three search tasks, including known-item search, specific information search, and…

  8. "Color-Blind" Racism.

    Science.gov (United States)

    Carr, Leslie G.

    Examining race relations in the United States from a historical perspective, this book explains how the constitution is racist and how color blindness is actually a racist ideology. It is argued that Justice Harlan, in his dissenting opinion in Plessy v. Ferguson, meant that the constitution and the law must remain blind to the existence of race…

  9. 42 CFR 436.531 - Determination of blindness.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Determination of blindness. 436.531 Section 436.531... Requirements for Medicaid Eligibility Blindness § 436.531 Determination of blindness. In determining blindness... determine on behalf of the agency— (1) Whether the individual meets the definition of blindness; and (2...

  10. Depth from Optical Turbulence

    Science.gov (United States)

    2012-01-01


  11. Is love blind? Sexual behavior and psychological adjustment of adolescents with blindness

    NARCIS (Netherlands)

    Kef, S.; Bos, H.

    2006-01-01

    In the present study, we examined sexual knowledge, sexual behavior, and psychological adjustment of adolescents with blindness. The sample included 36 Dutch adolescents who are blind, 16 males and 20 females. Results of the interviews revealed no problems regarding sexual knowledge or psychological

  12. Blind identification and separation of complex-valued signals

    CERN Document Server

    Moreau, Eric

    2013-01-01

    Blind identification consists of estimating a multi-dimensional system only through the use of its output, and source separation, the blind estimation of the inverse of the system. Estimation is generally carried out using different statistics of the output. The authors of this book consider the blind identification and source separation problem in the complex-domain, where the available statistical properties are richer and include non-circularity of the sources - underlying components. They define identifiability conditions and present state-of-the-art algorithms that are based on algebraic methods as well as iterative algorithms based on maximum likelihood theory. Contents 1. Mathematical Preliminaries. 2. Estimation by Joint Diagonalization. 3. Maximum Likelihood ICA. About the Authors Eric Moreau is Professor of Electrical Engineering at the University of Toulon, France. His research interests concern statistical signal processing, high order statistics and matrix/tensor decompositions with applic...

  13. Color blindness among multiple sclerosis patients in Isfahan

    Directory of Open Access Journals (Sweden)

    Vahid Shaygannejad

    2012-01-01

    Full Text Available Background: Multiple sclerosis (MS) is a demyelinating disease of the central nervous system that affects young and middle-aged individuals, causing axonal damage and a variety of signs and symptoms. Because color vision requires normal function of the optic nerve and macula, MS may impair it by affecting the optic nerve. In this survey, we evaluated color vision abnormalities and their relationship with a history of optic neuritis and abnormal visual evoked potentials (VEPs) among MS patients. Materials and Methods: The case group comprised clinically definite MS patients, and an equal number of individuals from the normal population were enrolled as controls. Color vision of all participants was evaluated with the Ishihara test, and VEPs and history of optic neuritis (ON) were then assessed. The frequency of color blindness was compared between the case and control groups, and color-blind patients were compared with those with a history of ON and abnormal VEPs. Results: 63 MS patients and 63 controls were enrolled in this study. Twelve patients were color blind on the Ishihara test versus only 3 controls, a significant difference between the two groups (P = 0.013). There was a significant relationship between color blindness and abnormal VEP (R = 0.53, P = 0.023) but not between color blindness and ON (P = 0.67). Conclusions: This study demonstrates a significant association between color blindness and multiple sclerosis, including in patients with abnormally prolonged VEP latencies. Therefore, in individuals with acquired color vision impairment, evaluation for potentially serious underlying diseases such as MS is essential.

  14. Poverty and Blindness in Nigeria: Results from the National Survey of Blindness and Visual Impairment.

    Science.gov (United States)

    Tafida, A; Kyari, F; Abdull, M M; Sivasubramaniam, S; Murthy, G V S; Kana, I; Gilbert, Clare E

    2015-01-01

    Poverty can be a cause and a consequence of blindness. Some causes affect only the poorest communities (e.g. trachoma), and poor individuals are less likely to access services. In low-income countries, cataract-blind adults have been shown to be less economically active, indicating that blindness can exacerbate poverty. This study explores associations between poverty and blindness using national survey data from Nigeria. Participants ≥40 years were examined in 305 clusters (2005-2007). Sociodemographic information, including literacy and occupation, was obtained by interview. Presenting visual acuity (PVA) was assessed using a reduced tumbling-E LogMAR chart. Full ocular examination was undertaken by experienced ophthalmologists on all participants with reduced PVA. Prevalences of blindness were 8.5% (95% CI 7.7-9.5%), 2.5% (95% CI 2.0-3.1%), and 1.5% (95% CI 1.2-2.0%) in poorest, medium and affluent households, respectively (p = 0.001). Cause-specific prevalences of blindness from cataract, glaucoma, uncorrected aphakia and corneal opacities were significantly higher in poorer households. Cataract surgical coverage was low (37.2%), and lowest among females in poor households (25.3%). Spectacle coverage was three times lower in poor than in affluent households (2.4% vs. 7.5%). In Nigeria, blindness is associated with poverty, in part reflecting lower access to services. Reducing avoidable causes of blindness will not be achieved unless access to services improves, particularly for the poor and for women.

  15. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required to obtain information that relates to crystallographic surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides a means of atomic resolution in which the tip participates actively in the imaging process. Two metallic surfaces influence ions trapped… Features observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted so as to resolve more details of the crystalline morphology (see figure). Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current…

  16. Discovery of Nine Gamma-Ray Pulsars in Fermi-Lat Data Using a New Blind Search Method

    Science.gov (United States)

    Celik-Tinmaz, Ozlem; Ferrara, E. C.; Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; hide

    2011-01-01

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 x 10^35 erg per second and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415 and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| greater than 10 degrees). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 x 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 x 10^33 erg per second) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.
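    The photon probability weights mentioned above can be illustrated with a toy weighted Rayleigh-type power statistic: each photon contributes a unit phasor scaled by its probability of originating from the pulsar, so likely background photons are down-weighted. This is a simplified sketch, not the hierarchical semi-coherent method of the abstract; the function name, frequency grid and simulation are illustrative.

    ```python
    import numpy as np

    def weighted_power(times, weights, freqs):
        """Photon-probability-weighted Fourier power at each trial frequency:
        sum of per-photon phasors scaled by their probability weights."""
        phases = 2.0 * np.pi * np.outer(freqs, times)   # (n_freqs, n_photons)
        return np.abs((weights * np.exp(1j * phases)).sum(axis=1)) ** 2

    # Simulate photon arrivals pulsed at 5 Hz and recover the frequency
    rng = np.random.default_rng(0)
    candidates = np.sort(rng.uniform(0.0, 100.0, 5000))
    keep = rng.random(5000) < 0.5 * (1.0 + np.cos(2.0 * np.pi * 5.0 * candidates))
    times = candidates[keep]
    freqs = np.linspace(4.5, 5.5, 201)
    power = weighted_power(times, np.ones(len(times)), freqs)
    best_freq = freqs[np.argmax(power)]
    ```

    In a real LAT search the weights come from a spatial/spectral likelihood model of each photon, and the frequency-derivative and sky-position dimensions make the brute-force grid above infeasible, which is what motivates the metric-based gridding.
    
    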

  17. Double-blind, placebo-controlled food challenge with apple

    DEFF Research Database (Denmark)

    Skamstrup Hansen, K; Vestergaard, H; Stahl Skov, P

    2001-01-01

    The aim of the study was to develop and evaluate different methods of double-blind, placebo-controlled food challenge (DBPCFC) with apple. Three different DBPCFC models were evaluated: fresh apple juice, freshly grated apple, and freeze-dried apple powder. All challenges were performed outside...... frequency of reactions to placebo, probably due to the ingredients used for blinding. The sensitivity of the models with freshly grated apple and freeze-dried apple powder was 0.74/0.60. An increase in sensitivity is desirable. The freeze-dried apple powder proved to be useful for SPT, HR, and oral...

  18. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

    Full Text Available This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In the first step, the algorithm estimates a Signal- and Frequency-Dependent (SFD) noise model. In the second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and on scans of old photographs.
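    A crude version of the first step — estimating how noise standard deviation varies with intensity from a single image — can be sketched as follows. This is a toy stand-in for the Noise Clinic's actual SFD estimator: it collects per-patch (mean, std) pairs and keeps a low percentile of the stds per intensity bin, since textured patches have inflated stds.

    ```python
    import numpy as np

    def estimate_noise_curve(img, patch=8, n_bins=4):
        """Estimate noise std as a function of intensity from patch statistics."""
        h, w = img.shape
        means, stds = [], []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                p = img[i:i + patch, j:j + patch]
                means.append(p.mean())
                stds.append(p.std())
        means, stds = np.array(means), np.array(stds)
        edges = np.linspace(means.min(), means.max() + 1e-9, n_bins + 1)
        curve = np.full(n_bins, np.nan)
        for b in range(n_bins):
            sel = (means >= edges[b]) & (means < edges[b + 1])
            if sel.any():
                # low percentile rejects patches whose std is inflated by texture
                curve[b] = np.percentile(stds[sel], 10)
        return edges, curve

    # Synthetic check: stepped image whose noise std grows with intensity
    rng = np.random.default_rng(0)
    base = np.repeat(np.array([0.0, 25.0, 50.0, 75.0]), 64) * np.ones((64, 1))
    img = base + rng.normal(0.0, 1.0, base.shape) * (1.0 + 0.05 * base)
    edges, curve = estimate_noise_curve(img)
    ```

    The recovered curve increases with intensity, mirroring the signal-dependent (e.g. Poisson-dominated) noise of real camera raw and JPEG images.
    
    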

  19. A novel deconvolution method for modeling UDP-N-acetyl-D-glucosamine biosynthetic pathways based on 13C mass isotopologue profiles under non-steady-state conditions

    Directory of Open Access Journals (Sweden)

    Belshoff Alex C

    2011-05-01

    Full Text Available Abstract Background Stable isotope tracing is a powerful technique for following the fate of individual atoms through metabolic pathways. Measuring isotopic enrichment in metabolites provides quantitative insights into the biosynthetic network and enables flux analysis as a function of external perturbations. NMR and mass spectrometry are the techniques of choice for global profiling of stable isotope labeling patterns in cellular metabolites. However, meaningful biochemical interpretation of the labeling data requires both quantitative analysis and complex modeling. Here, we demonstrate a novel approach that involved acquiring and modeling the timecourses of 13C isotopologue data for UDP-N-acetyl-D-glucosamine (UDP-GlcNAc synthesized from [U-13C]-glucose in human prostate cancer LnCaP-LN3 cells. UDP-GlcNAc is an activated building block for protein glycosylation, which is an important regulatory mechanism in the development of many prominent human diseases including cancer and diabetes. Results We utilized a stable isotope resolved metabolomics (SIRM approach to determine the timecourse of 13C incorporation from [U-13C]-glucose into UDP-GlcNAc in LnCaP-LN3 cells. 13C Positional isotopomers and isotopologues of UDP-GlcNAc were determined by high resolution NMR and Fourier transform-ion cyclotron resonance-mass spectrometry. A novel simulated annealing/genetic algorithm, called 'Genetic Algorithm for Isotopologues in Metabolic Systems' (GAIMS was developed to find the optimal solutions to a set of simultaneous equations that represent the isotopologue compositions, which is a mixture of isotopomer species. The best model was selected based on information theory. The output comprises the timecourse of the individual labeled species, which was deconvoluted into labeled metabolic units, namely glucose, ribose, acetyl and uracil. 
The performance of the algorithm was demonstrated by validating the computed fractional 13C enrichment in these subunits.
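    The subunit decomposition described above rests on a simple forward model: if a molecule is assembled from independently labeled subunits (here glucose, ribose, acetyl and uracil), its mass-isotopologue distribution is the convolution of the subunit distributions. A minimal sketch of that forward model (not the GAIMS fitting algorithm itself):

    ```python
    import numpy as np

    def combined_isotopologues(subunit_dists):
        """Mass-isotopologue distribution of a molecule built from
        independently labeled subunits: the convolution of the subunit
        isotopologue distributions (index = number of 13C atoms)."""
        dist = np.array([1.0])
        for d in subunit_dists:
            dist = np.convolve(dist, np.asarray(d, dtype=float))
        return dist

    # Two subunits, each 50% unlabeled / 50% singly labeled
    mix = combined_isotopologues([[0.5, 0.5], [0.5, 0.5]])
    ```

    GAIMS effectively inverts this model, searching by simulated annealing/genetic optimization for the subunit distributions whose convolutions best reproduce the measured isotopologue timecourses.
    
    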

  20. Botulinum toxin in the treatment of orofacial tardive dyskinesia : A single blind study

    NARCIS (Netherlands)

    Slotema, Christina W.; van Harten, Peter N.; Bruggeman, Richard; Hoek, Hans W.

    2008-01-01

    Objective: Orofacial tardive dyskinesia (OTD) is difficult to treat, and Botulinum Toxin A (BTA) may be an option. Methods: In a single-blind (raters were blind) study (N = 12, duration 33 weeks), OTD was treated with Botulinum Toxin A in three consecutive sessions with increasing dosages. The

  1. Real-time adaptive concepts in acoustics blind signal separation and multichannel echo cancellation

    CERN Document Server

    Schobben, Daniel W E

    2001-01-01

    Blind Signal Separation (BSS) deals with recovering (filtered versions of) source signals from an observed mixture thereof. The term `blind' relates to the fact that there are no reference signals for the source signals and also that the mixing system is unknown. This book presents a new method for blind signal separation, which is developed to work on microphone signals. Acoustic Echo Cancellation (AEC) is a well-known technique to suppress the echo that a microphone picks up from a loudspeaker in the same room. Such acoustic feedback occurs for example in hands-free telephony and can lead to a perceived loud tone. For an application such as a voice-controlled television, a stereo AEC is required to suppress the contribution of the stereo loudspeaker setup. A generalized AEC is presented that is suited for multi-channel operation. New algorithms for Blind Signal Separation and multi-channel Acoustic Echo Cancellation are presented. A background is given in array signal processing methods, adaptive filter the...

  2. Blind Cat

    Directory of Open Access Journals (Sweden)

    Arka Chattopadhyay

    2015-08-01

    There’s no way to know whether he was blind from birth or blindness was something he had picked up from his fights with other cats. He wasn’t an urban cat. He lived in a little village, soaked in the smell of fish with a river running right beside it. Cats like these have stories of a different kind. The two-storied hotel where he lived had a wooden floor. It stood right on the riverbank and had more than a tilt towards the river, as if deliberately leaning on the water.

  3. The use of deconvolution techniques to identify the fundamental mixing characteristics of urban drainage structures.

    Science.gov (United States)

    Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G

    2010-01-01

    Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.
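    The idea of recovering a residence-time distribution by deconvolving a downstream solute trace against the upstream trace can be sketched with a Tikhonov-regularised least-squares deconvolution. This is an illustrative stand-in only; the systems-theory identification approach in the paper differs in detail.

    ```python
    import numpy as np

    def deconvolve_rtd(upstream, downstream, alpha=1e-8):
        """Estimate the system residence-time distribution (RTD) h such that
        downstream ≈ upstream ⊛ h, via regularised least squares; the
        cumulative sum of h is the CRTD."""
        n = len(downstream)
        A = np.zeros((n, n))                 # Toeplitz convolution matrix
        for i in range(n):
            A[i, : i + 1] = upstream[i::-1]
        # (A^T A + alpha I) h = A^T y  -- alpha damps noise amplification
        h = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ downstream)
        return h, np.cumsum(h)

    # Synthetic check: exponential RTD, impulse injection upstream
    t = np.arange(50)
    h_true = np.exp(-t / 5.0)
    h_true /= h_true.sum()
    u = np.zeros(50); u[0] = 1.0
    y = np.convolve(u, h_true)[:50]
    h_est, crtd = deconvolve_rtd(u, y)
    ```

    With noisy, non-instantaneous laboratory injections the regularisation weight `alpha` matters much more than in this clean synthetic case, which is precisely the practical difficulty the paper addresses.
    
    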

  4. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

    Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS is proposed. The signal is precisely reconstructed by the alternating iteration of sparse coding and basis updating, and the hopping frequencies are directly estimated based on the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information of the frequency hopping signals; hence, it offers an effective solution to the inadequate prior information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS, and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of noncooperative frequency hopping signals with a low signal-to-noise ratio.
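    The alternation of sparse coding and basis updating can be illustrated with a toy orthonormal dictionary-learning loop: hard-threshold coding in the current basis, followed by an orthogonal-Procrustes basis update. This is a schematic stand-in for OBD-BCS, not the algorithm from the paper.

    ```python
    import numpy as np

    def learn_orthonormal_basis(X, sparsity, n_iter=30, seed=0):
        """Blind alternation: (1) sparse-code the data in the current
        orthonormal basis by keeping the largest coefficients per column,
        (2) update the basis by orthogonal Procrustes to fit the codes."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        B, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthonormal init
        S = B.T @ X
        for _ in range(n_iter):
            S = B.T @ X
            thresh = np.sort(np.abs(S), axis=0)[-sparsity]  # k-th largest per column
            S[np.abs(S) < thresh] = 0.0
            U, _, Vt = np.linalg.svd(X @ S.T)               # B = argmin ||X - B S||_F
            B = U @ Vt
        return B, S

    # Synthetic data: sparse codes in a hidden orthonormal basis
    rng = np.random.default_rng(1)
    B0, _ = np.linalg.qr(rng.standard_normal((16, 16)))
    S0 = np.zeros((16, 40))
    for j in range(40):
        idx = rng.choice(16, size=3, replace=False)
        S0[idx, j] = rng.standard_normal(3)
    X = B0 @ S0
    B, S = learn_orthonormal_basis(X, sparsity=3)
    ```

    In the frequency-hopping setting the "data" are compressive measurements rather than the signal itself, and the learned sparse representation is what allows hop frequencies to be read off without prior knowledge of the hopping pattern.
    
    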

  5. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    Science.gov (United States)

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting, R, and respiring, DR, conditions is reported. In this study, recently developed chemometric software for use with three-dimensional instrumentation data was implemented, using a statistically-based Fisher ratio method. The Fisher ratio method is fully automated and will rapidly reduce the data to pinpoint two-dimensional chromatographic peaks differentiating sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold greater than the number of previously reported metabolite peaks identified (26). In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio, for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 were statistically changing to the 95% confidence limit between the DR and R conditions according to the rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully-deconvoluted (i.e. mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double precision floating point).
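    The core of the Fisher-ratio screen is a per-feature ratio of between-class to within-class variance computed across replicate runs; features (chromatographic peaks) with large ratios differentiate the sample types. A minimal sketch, illustrative only and not the authors' software:

    ```python
    import numpy as np

    def fisher_ratios(class_a, class_b):
        """Per-feature Fisher ratio for two classes of replicate runs
        (rows = samples, columns = features such as peak intensities):
        between-class variance over pooled within-class variance."""
        mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
        grand = np.vstack([class_a, class_b]).mean(axis=0)
        n_a, n_b = len(class_a), len(class_b)
        between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
        within = (((class_a - mean_a) ** 2).sum(axis=0)
                  + ((class_b - mean_b) ** 2).sum(axis=0))
        return between / (within / (n_a + n_b - 2))  # 1 between-class dof

    # Feature 0 differs between growth conditions, feature 1 does not
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, (20, 2))
    b = rng.normal(0.0, 1.0, (20, 2))
    b[:, 0] += 10.0
    ratios = fisher_ratios(a, b)
    ```

    Lowering the threshold on this ratio admits more candidate features, which is exactly the trade-off the abstract studies: more metabolites identified, at the cost of working closer to the noise level.
    
    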

  6. The blind hens’ challenge

    DEFF Research Database (Denmark)

    Sandøe, Peter; Hocking, Paul M.; Forkman, Björn

    2014-01-01

    Animal ethicists have recently debated the ethical questions raised by disenhancing animals to improve their welfare. Here, we focus on the particular case of breeding blind hens for commercial egg-laying systems, in order to benefit their welfare. Many people find breeding blind hens intuitively… …about breeding blind hens. But we also argue that alternative views, which (for example) claim that it is important to respect the telos or rights of an animal, do not offer a more convincing solution to questions raised by the possibility of disenhancing animals for their own benefit.

  7. Clinical governance breakdown: Australian cases of wilful blindness and whistleblowing.

    Science.gov (United States)

    Cleary, Sonja; Duke, Maxine

    2017-01-01

    After their attempts to have patient safety concerns addressed internally were ignored by wilfully blind managers, nurses from Bundaberg Base Hospital and Macarthur Health Service felt compelled to 'blow the whistle'. Wilful blindness is the human desire to prefer ignorance to knowledge; the responsibility to be informed is shirked. To provide an account of instances of wilful blindness identified in two high-profile cases of nurse whistleblowing in Australia. Critical case study methodology using Fay's Critical Social Theory to examine, analyse and interpret existing data generated by the Commissions of Inquiry held into Bundaberg Base Hospital and Macarthur Health Service patient safety breaches. All data was publicly available and assessed according to the requirements of unobtrusive research methods and secondary data analysis. Ethical considerations: Data collection for the case studies relied entirely on publicly available documentary sources recounting and detailing past events. Data from both cases reveal managers demonstrating wilful blindness towards patient safety concerns. Concerns were unaddressed; nurses, instead, experienced retaliatory responses leading to a 'social crisis' in the organisation and to whistleblowing. Managers tasked with clinical governance must be aware of mechanisms with the potential to blind them. The human tendency to favour positive news and avoid conflict is powerful. Understanding wilful blindness can assist managers' awareness of the competing emotions occurring in response to ethical challenges, such as whistleblowing.

  8. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Full Text Available Abstract Background For many gene structures it is impossible to resolve intensity data uniquely to establish the abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that may themselves be uncertain, such as a regression fit to probe sequence models. We demonstrate its efficacy by extensive simulations as well as on various biological data.
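    The matrix-model view makes the "degeneracy problem" easy to state: with probe intensities y ≈ A x for a probe-by-variant design matrix A, the variant abundances x are uniquely determined only if A has full column rank. A minimal identifiability check (illustrative; the paper's analysis of constraints goes well beyond this):

    ```python
    import numpy as np

    def uniquely_resolvable(A):
        """True iff the probe-to-variant design matrix A (probes x variants)
        has full column rank, i.e. variant abundances are identifiable
        from noiseless probe intensities."""
        A = np.asarray(A, dtype=float)
        return np.linalg.matrix_rank(A) == A.shape[1]

    # Three probes, two variants: resolvable if some probe separates them,
    # degenerate if every probe hits both variants identically
    A_ok = np.array([[1, 0], [0, 1], [1, 1]])
    A_degenerate = np.array([[1, 1], [1, 1], [0, 0]])
    ```

    A rank deficiency means an entire subspace of abundance vectors reproduces the same intensities, which is why additional constraints (such as probe-sequence response models) are needed to pick a unique solution.
    
    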

  9. On imitation among young and blind children

    Directory of Open Access Journals (Sweden)

    Maria Rita Campello Rodrigues

    2016-07-01

    Full Text Available This article investigates imitation among young blind children. The research was conducted as a mosaic over time, with field material drawn from two settings: professional experience with early stimulation of blind babies, and a workshop with blind and low-vision youths aged 13-18. By affirming the situated character of knowledge, the research indicates that imitation can be one of the ways of creating a common world between blind and sighted young people. Imitation among blind youths is a multi-sensory process that requires bodily experience, involving both blind people and those who see. The paper concludes by pointing to the singular character of imitation and, at the same time, affirming its relevance to the development and inclusion of both blind children and blind youths.

  10. Blind MuseumTourer: A System for Self-Guided Tours in Museums and Blind Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Apostolos Meliones

    2018-01-01

    Full Text Available Notably valuable efforts have focused on helping people with special needs. In this work, we build upon the experience from the BlindHelper smartphone outdoor pedestrian navigation app and present Blind MuseumTourer, a system for indoor interactive autonomous navigation for blind and visually impaired persons and groups (e.g., pupils, which has primarily addressed blind or visually impaired (BVI accessibility and self-guided tours in museums. A pilot prototype has been developed and is currently under evaluation at the Tactual Museum with the collaboration of the Lighthouse for the Blind of Greece. This paper describes the functionality of the application and evaluates candidate indoor location determination technologies, such as wireless local area network (WLAN and surface-mounted assistive tactile route indications combined with Bluetooth low energy (BLE beacons and inertial dead-reckoning functionality, to come up with a reliable and highly accurate indoor positioning system adopting the latter solution. The developed concepts, including map matching, a key concept for indoor navigation, apply in a similar way to other indoor guidance use cases involving complex indoor places, such as in hospitals, shopping malls, airports, train stations, public and municipality buildings, office buildings, university buildings, hotel resorts, passenger ships, etc. The presented Android application is effectively a Blind IndoorGuide system for accurate and reliable blind indoor navigation.

  11. Rolling bearing fault diagnosis based on time-delayed feedback monostable stochastic resonance and adaptive minimum entropy deconvolution

    Science.gov (United States)

    Li, Jimeng; Li, Ming; Zhang, Jinfeng

    2017-08-01

    Rolling bearings are key components in modern machinery, and harsh operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the fault-related feature information contained in the vibration signals is weak, which makes it difficult to identify the fault symptoms of rolling bearings in time. The paper therefore proposes a novel weak-signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, which deconvolves the effect of the transmission path and clarifies the defect-induced impulses, and a modified power spectrum kurtosis (MPSK) index is constructed to realize adaptive selection of the filter length in the MED algorithm. By introducing a time-delayed feedback term into an over-damped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting an appropriate time delay, feedback intensity and re-scaling ratio with a genetic algorithm, SR can be produced to realize resonance detection of the weak signal. The combination of the adaptive MED (AMED) method and the TFMSR method is conducive to extracting feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
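    The role of a kurtosis index in choosing among candidate deconvolution filters can be sketched as follows. This is a generic impulsiveness-based selection standing in for the paper's modified power spectrum kurtosis (MPSK) criterion; the filters and signal are illustrative.

    ```python
    import numpy as np

    def kurtosis(y):
        """Normalised fourth moment: large for impulsive (fault-like) signals."""
        y = y - y.mean()
        return np.mean(y ** 4) / np.mean(y ** 2) ** 2

    def select_filter(x, candidate_filters):
        """Apply each candidate deconvolution filter and keep the one whose
        output is most impulsive (largest kurtosis)."""
        scores = [kurtosis(np.convolve(x, f, mode="same")) for f in candidate_filters]
        return int(np.argmax(scores)), scores

    # Noisy signal with periodic impacts: an identity filter preserves the
    # impacts, while a long moving average smears them out
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 2000)
    x[::200] += 20.0
    identity = np.zeros(31); identity[15] = 1.0
    moving_avg = np.ones(31) / 31.0
    best, scores = select_filter(x, [identity, moving_avg])
    ```

    In MED proper the filter itself is optimised to maximise output kurtosis; the index above then adjudicates between filters of different lengths, which is where an adaptive selection rule pays off.
    
    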

  12. A novel optimised and validated method for analysis of multi-residues of pesticides in fruits and vegetables by microwave-assisted extraction (MAE)-dispersive solid-phase extraction (d-SPE)-retention time locked (RTL)-gas chromatography-mass spectrometry with Deconvolution reporting software (DRS).

    Science.gov (United States)

    Satpathy, Gouri; Tyagi, Yogesh Kumar; Gupta, Rajinder Kumar

    2011-08-01

    A rapid, effective and eco-friendly method for sensitive screening and quantification of 72 pesticide residues in fruits and vegetables, using microwave-assisted extraction (MAE) followed by dispersive solid-phase extraction (d-SPE) and retention-time-locked (RTL) capillary gas-chromatographic separation with trace-ion-mode mass spectrometric determination, has been validated as per ISO/IEC 17025:2005. Identification and reporting with total and extracted ion chromatograms were greatly facilitated by Deconvolution Reporting Software (DRS). For all compounds, LODs were 0.002-0.02 mg/kg and LOQs were 0.025-0.100 mg/kg. Correlation coefficients of the calibration curves in the range of 0.025-0.50 mg/kg were >0.993. To validate matrix effects, repeatability, reproducibility, recovery and overall uncertainty were calculated for the 35 matrices at 0.025, 0.050 and 0.100 mg/kg. Recovery ranged between 72% and 114%, with RSD of <20% for repeatability and intermediate precision. The reproducibility of the method was evaluated through inter-laboratory participation, with Z-scores within ±2. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. The Blind Identification of Multi-Inputs and Multi-Outputs Shallow-Water Acoustic Channel

    International Nuclear Information System (INIS)

    Li, R Y; Zhou, J H; Wang, L

    2006-01-01

    Blind channel identification/estimation is very important for object detection, tracking and localization in ocean acoustics. Time-domain blind identification algorithms require the exact length of the channel being identified. Because of the characteristics of the shallow-water channel, the length of the channel impulse response sequence is uncertain. Hence, a frequency-domain method for blind MIMO (Multiple-Input Multiple-Output) underwater identification based on higher-order statistics (HOS) is used to estimate the original acoustic channel from the signals received on hydrophones only, at low signal-to-noise ratio (SNR). Simulation results in the acoustic environment prove that this approach is effective and efficient for blind identification of the shallow-water acoustic channel.

  14. A STUDY ON PREVALENCE AND CAUSES OF CORNEAL BLINDNESS IN PAEDIATRIC AGE GROUP

    Directory of Open Access Journals (Sweden)

    E. Ramadevi

    2017-12-01

    Full Text Available BACKGROUND Corneal disease is responsible for less than 2% of blindness in children in industrialised countries. In poor countries of the world, corneal scarring occurs due to vitamin A deficiency, measles and ophthalmia neonatorum. Thus, corneal disease is an important cause of blindness among children living in developing nations, which already carry a major burden of blindness. The aims of the study were to determine: (1) the prevalence of corneal blindness in the paediatric age group; (2) the causes of corneal blindness in the paediatric age group; and (3) the morbidity of corneal blindness in the paediatric age group. MATERIALS AND METHODS This was a cross-sectional observational study. Study Period- December 2014 to August 2016. Study Setting- Government General Hospital, Kakinada. Sample Size- 50 patients. Inclusion Criteria- Children aged 6 to 12 years with corneal blindness who attended the outpatient department during the study period. Exclusion Criteria- Children with childhood blindness from causes other than corneal pathology. Study Tools- A predesigned, semi-structured questionnaire covering age, sex, age of onset of visual loss, laterality, history of ocular injury, vitamin A immunisation, family history of consanguinity, place of residence and socioeconomic status. Visual acuity was measured using an E optotype and Landolt broken-C chart with best corrected vision. Visual loss was classified according to the WHO categories of visual impairment. Ophthalmic examination was done by slit lamp and B-scan. RESULTS Ocular trauma and corneal ulcers were the most common causes of corneal blindness, and 84% of corneal blindness cases were preventable and curable. CONCLUSION Trauma was the commonest cause of corneal blindness, followed by infectious keratitis. 84% of corneal blindness was preventable and curable, and most causes of corneal blindness were avoidable.

  15. Building a future full of opportunity for blind youths in science

    Science.gov (United States)

    Beck-Winchatz, B.; Riccobono, M. A.

    Like their sighted peers, many blind students in elementary, middle and high school are naturally interested in space. This interest can motivate them to learn fundamental scientific, quantitative and critical thinking skills, and sometimes even lead to careers in SMET disciplines. However, these students are often at a disadvantage in science because of the ubiquity of important graphical information that is generally not available in accessible formats, the unfamiliarity of teachers with non-visual teaching methods, lack of access to blind role models, and the low expectations of their teachers and parents. In this presentation we will describe joint efforts by NASA and the National Federation of the Blind's (NFB) Center for Blind Youth in Science to develop and implement strategies to promote opportunities for blind youth in science. These include the development of tactile space science books and curriculum materials, science academies for blind middle school and high school students, internship and mentoring programs, as well as research on non-visual learning techniques. This partnership with the NFB exemplifies the effectiveness of collaborations between NASA and consumer-directed organizations in improving opportunities for underserved and underrepresented individuals. Session participants will also have the opportunity to examine some of the recently developed tactile space science education materials themselves.

  16. Comparison of Arthroscopically Guided Suprascapular Nerve Block and Blinded Axillary Nerve Block vs. Blinded Suprascapular Nerve Block in Arthroscopic Rotator Cuff Repair: A Randomized Controlled Trial

    OpenAIRE

    Ko, Sang Hun; Cho, Sung Do; Lee, Chae Chil; Choi, Jang Kyu; Kim, Han Wook; Park, Seon Jae; Bae, Mun Hee; Cha, Jae Ryong

    2017-01-01

    Background The purpose of this study was to compare the results of arthroscopically guided suprascapular nerve block (SSNB) and blinded axillary nerve block with those of blinded SSNB in terms of postoperative pain and satisfaction within the first 48 hours after arthroscopic rotator cuff repair. Methods Forty patients who underwent arthroscopic rotator cuff repair for medium-sized full thickness rotator cuff tears were included in this study. Among them, 20 patients were randomly assigned to...

  17. Detecting Signage and Doors for Blind Navigation and Wayfinding.

    Science.gov (United States)

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-07-01

    Signage plays a very important role to find destinations in applications of navigation and wayfinding. In this paper, we propose a novel framework to detect doors and signage to help blind people accessing unfamiliar indoor environments. In order to eliminate the interference information and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using a bipartite graph matching. The proposed method can handle multiple signage detection. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door frame model which is independent to door appearances. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of our proposed method.

  18. Psychological and social adjustment to blindness: Understanding ...

    African Journals Online (AJOL)

    Psychological and social adjustment to blindness: Understanding from two groups of blind people in Ilorin, Nigeria. ... Background: Blindness can cause psychosocial distress leading to maladjustment if not mitigated. Maladjustment is a secondary burden that further reduces the quality of life of the blind. Adjustment is often ...

  19. Effects of thermal treatment on the MgxZn1−xO films and fabrication of visible-blind and solar-blind ultraviolet photodetectors

    International Nuclear Information System (INIS)

    Tian, Chunguang; Jiang, Dayong; Tan, Zhendong; Duan, Qian; Liu, Rusheng; Sun, Long; Qin, Jieming; Hou, Jianhua; Gao, Shang; Liang, Qingcheng; Zhao, Jianxun

    2014-01-01

    Highlights: • Single-phase wurtzite/cubic MgxZn1−xO films were grown by the RF magnetron sputtering technique. • We focus on the red-shift caused by annealing the MgxZn1−xO films. • MSM-structured visible-blind and solar-blind UV photodetectors were fabricated. - Abstract: A series of single-phase MgxZn1−xO films with different Mg contents were prepared on quartz substrates by the RF magnetron sputtering technique using different MgZnO targets, and annealed under the atmospheric environment. The absorption edges of the MgxZn1−xO films can cover the whole near-ultraviolet and even the whole solar-blind spectral range, and solar-blind wurtzite/cubic MgxZn1−xO films have been realized successfully by the same method. In addition, the absorption edges of the annealed films shift to longer wavelengths, which is caused by the diffusion of Zn atoms gathering at the surface during the thermal treatment process. Finally, truly solar-blind metal-semiconductor-metal structured photodetectors based on wurtzite Mg0.445Zn0.555O and cubic Mg0.728Zn0.272O films were fabricated. The corresponding peak responsivities are 17 mA/W at 275 nm and 0.53 mA/W at 250 nm under a 120 V bias, respectively

  20. The impact of exercise therapy on the musculoskeletal abnormalities of blind boy students of 12- 18 years old at Tehran Mohebbi blind school

    Directory of Open Access Journals (Sweden)

    Habib Allah Jadidi

    2009-08-01

    Full Text Available Introduction: The purpose of this study was to examine 13 musculoskeletal abnormalities (forward head, lateral bending head, shoulder dropping, scoliosis, kyphosis, lumbar lordosis, flat back, pelvic obliquity, genu varum, genu valgum (x-leg), flat foot, pes cavus, and hallux valgus) after a period of exercise therapy in blind boy students without secondary disability. Materials and Methods: In this semi-experimental study, 60 boy students from secondary and high school (12-18 years old) were included, comprising 34 congenitally blind and 26 semi-blind students. They were selected among 135 students at Tehran Mohebbi blind school and were tested with measurement tools (symmetrigraph, anthropometer, and podioscope). After examining the results with the New York test, the students who were diagnosed with one or more musculoskeletal abnormalities took part in four months of exercises, with 3 sessions per week. The results were registered at the end of the exercise program and a secondary exam was administered. The data before and after the exam were analyzed. Results: 80 percent of the blind students at the pre-exam had musculoskeletal abnormalities, which decreased to 45 percent after the exercises. There were significant differences in the rate of recovery for 11 abnormalities (Exact – Sign = 0 < 0.05) and no significant differences for the pelvic obliquity and genu valgum abnormalities (Exact – Sign = 1 < 0.05). Conclusion: The research findings emphasize the validity and importance of exercise therapy for musculoskeletal abnormalities.

  1. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, the data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identification while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
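
    The ULSA itself is described only at a high level in this record; as a minimal, hypothetical sketch of the generic library-search step it builds on — binning centroided spectra onto a fixed m/z grid and ranking candidates by a cosine (dot-product) score — the bin width and entry names below are illustrative, not the paper's:

```python
import numpy as np

def bin_spectrum(mz, inten, bin_width=0.01, mz_max=500.0):
    """Rasterise a centroided spectrum onto a fixed m/z grid."""
    n_bins = int(mz_max / bin_width)
    vec = np.zeros(n_bins)
    idx = np.clip((np.asarray(mz) / bin_width).astype(int), 0, n_bins - 1)
    np.add.at(vec, idx, inten)
    return vec

def cosine_score(query, ref):
    """Dot-product similarity between two binned spectra (0..1)."""
    den = np.linalg.norm(query) * np.linalg.norm(ref)
    return float(query @ ref) / den if den else 0.0

def library_search(query, library):
    """Rank library entries (name -> binned spectrum) by score, best first."""
    scores = {name: cosine_score(query, ref) for name, ref in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

    A probabilistic search like the ULSA would additionally propagate the deconvolution uncertainties into the match score; the plain cosine here is only the deterministic core.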

  2. Validation of methods for measurement of insulin secretion in humans in vivo

    DEFF Research Database (Denmark)

    Kjems, L L; Christiansen, E; Vølund, A

    2000-01-01

    To detect and understand the changes in beta-cell function in the pathogenesis of type 2 diabetes, an accurate and precise estimation of the prehepatic insulin secretion rate (ISR) is essential. There are two common methods to assess ISR, the deconvolution method (by Eaton and Polonsky), considered the ... of these mathematical techniques for quantification of insulin secretion have been tested in dogs, but not in humans. In the present studies, we examined the validity of both methods to recover the known infusion rates of insulin and C-peptide mimicking ISR during an oral glucose tolerance test. ISR from both ..., and a close agreement was found for the results of an oral glucose tolerance test. We also studied whether C-peptide kinetics are influenced by somatostatin infusion. The decay curves after bolus injection of exogenous biosynthetic human C-peptide, the kinetic parameters, and the metabolic clearance rate were ...
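
    The Eaton-Polonsky machinery is not spelled out in this record; as a sketch of the underlying operation — recovering a secretion rate by regularized discrete deconvolution of the concentration curve through a kinetic impulse response — the following uses a single-exponential kernel and illustrative numbers (the validated method uses two-compartment C-peptide kinetics):

```python
import numpy as np

def deconvolve_secretion(conc, dt, half_life=30.0, lam=1e-6):
    """Estimate a secretion rate s from concentrations c = K s, where K is
    the lower-triangular convolution matrix of a single-exponential decay.
    A second-difference (Tikhonov) penalty with weight lam stabilises the
    inversion when the data are noisy."""
    n = len(conc)
    k_el = np.log(2.0) / half_life                   # elimination rate constant
    h = np.exp(-k_el * np.arange(n) * dt) * dt       # discretised impulse response
    K = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    D = np.diff(np.eye(n), n=2, axis=0)              # second-difference operator
    return np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ np.asarray(conc))
```

    With lam near zero and noiseless data this inverts the triangular system almost exactly; real C-peptide data need a larger lam to keep the estimate smooth.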

  3. Blinding for unanticipated signatures

    NARCIS (Netherlands)

    D. Chaum (David)

    1987-01-01

    Previously known blind signature systems require an amount of computation at least proportional to the number of signature types, and also that the number of such types be fixed in advance. These requirements are not practical in some applications. Here, a new blind signature technique

  4. The sensory construction of dreams and nightmare frequency in congenitally blind and late blind individuals.

    Science.gov (United States)

    Meaidi, Amani; Jennum, Poul; Ptito, Maurice; Kupers, Ron

    2014-05-01

    We aimed to assess dream content in groups of congenitally blind (CB), late blind (LB), and age- and sex-matched sighted control (SC) participants. We conducted an observational study of 11 CB, 14 LB, and 25 SC participants and collected dream reports over a 4-week period. Every morning participants filled in a questionnaire related to the sensory construction of the dream, its emotional and thematic content, and the possible occurrence of nightmares. We also assessed participants' ability of visual imagery during waking cognition, sleep quality, and depression and anxiety levels. All blind participants had fewer visual dream impressions compared to SC participants. In LB participants, duration of blindness was negatively correlated with duration, clarity, and color content of visual dream impressions. CB participants reported more auditory, tactile, gustatory, and olfactory dream components compared to SC participants. In contrast, LB participants only reported more tactile dream impressions. Blind and SC participants did not differ with respect to emotional and thematic dream content. However, CB participants reported more aggressive interactions and more nightmares compared to the other two groups. Our data show that blindness considerably alters the sensory composition of dreams and that onset and duration of blindness plays an important role. The increased occurrence of nightmares in CB participants may be related to a higher number of threatening experiences in daily life in this group. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Using the computerized glow curve deconvolution method and the R package tgcd for the determination of thermoluminescence kinetic parameters of chilli powder samples by the GOK and OTOR models

    Energy Technology Data Exchange (ETDEWEB)

    Sang, Nguyen Duy, E-mail: ndsang@ctu.edu.vn [College of Rural Development, Can Tho University, Can Tho 270000 (Viet Nam); Faculty of Physics and Engineering Physics, University of Science, Ho Chi Minh 700000 (Viet Nam); Van Hung, Nguyen [Nuclear Research Institute, VAEI, Dalat 670000 (Viet Nam); Van Hung, Tran; Hien, Nguyen Quoc [Research and Development Center for Radiation Technology, VAEI, Ho Chi Minh 700000 (Viet Nam)

    2017-03-01

    Highlights: • TL analysis aims to calculate the kinetic parameters of the chilli powder. • There are differences in the kinetic parameters caused by the different radiation doses. • There are differences in the kinetic parameters due to applying the GOK model or the OTOR one. • The software R is applied for the first time in TL glow curve analysis of chilli powder. - Abstract: The kinetic parameters of thermoluminescence (TL) glow peaks of chilli powder irradiated by gamma rays at different doses of 0, 4 and 8 kGy have been calculated and estimated by the computerized glow curve deconvolution (CGCD) method and the R package tgcd using the TL glow curve data. The kinetic parameters of the TL glow peaks (i.e., activation energies (E), order of kinetics (b), trapping and recombination probability coefficients (R) and frequency factors (s)) are fitted with the general-order kinetics (GOK) and one trap-one recombination (OTOR) models. The kinetic parameters of the chilli powder differ with the sample storage time, the radiation dose, and whether the GOK or the OTOR model is applied. Samples stored for a shorter period have smaller kinetic parameter values than samples stored for a longer period. Comparing the kinetic parameter values of the three samples shows that the values for the non-irradiated samples are lowest, whereas the values for the 4 kGy irradiated samples are greater than those for the 8 kGy irradiated samples.
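
    The record does not reproduce the fitted peak-shape equation; as a sketch, the widely used general-order-kinetics (GOK) single-peak expression in the Kitis parameterisation (peak height Im, activation energy E in eV, peak temperature Tm in K, kinetic order b) can be written as follows — the parameter values in the usage are illustrative, not the paper's fitted ones:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def gok_peak(T, Im, E, Tm, b):
    """General-order-kinetics TL glow peak (Kitis parameterisation).
    T in kelvin, E in eV; normalised so that I(Tm) = Im. Valid for b > 1
    (the b -> 1 first-order limit uses the Randall-Wilkins form instead)."""
    x = (E / (K_B * T)) * (T - Tm) / Tm
    d = 2.0 * K_B * T / E
    dm = 2.0 * K_B * Tm / E
    bracket = (b - 1.0) * (1.0 - d) * (T / Tm) ** 2 * np.exp(x) + 1.0 + (b - 1.0) * dm
    return Im * b ** (b / (b - 1.0)) * np.exp(x) * bracket ** (-b / (b - 1.0))
```

    Packages such as tgcd fit sums of such peaks to a measured glow curve by nonlinear least squares; at T = Tm the bracket reduces to b, so the expression returns exactly Im at the peak.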

  6. Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution.

    Directory of Open Access Journals (Sweden)

    J M Portegies

    Full Text Available We propose two strategies to improve the quality of tractography results computed from diffusion weighted magnetic resonance imaging (DW-MRI data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations, associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI data via constrained spherical deconvolution (CSD. Thereby we improve the FOD as input for subsequent tractography. Secondly, we introduce the fiber to bundle coherence (FBC, a measure for quantification of fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing the spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data, acquired with different scanning protocols. On the phantom data we find that PDE enhancements improve both local metrics and global metrics of tractography results, compared to CSD without enhancements. On the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and they reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.

  7. Blind Separation of Event-Related Brain Responses into Independent Components

    National Research Council Canada - National Science Library

    Makeig, Scott

    1996-01-01

    .... We report here a method for the blind separation of event-related brain responses into spatially stationary and temporally independent subcomponents using an Independent Component Analysis algorithm...
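
    The record is truncated and the report's exact ICA variant is not stated; as a generic sketch of blind source separation — whitening followed by a FastICA-style fixed-point search for maximally non-Gaussian projections, with deflation and a tanh nonlinearity — the following is an assumption-laden illustration, not the authors' algorithm:

```python
import numpy as np

def fastica(X, n_iter=300, seed=0):
    """Deflation FastICA with a tanh nonlinearity.
    X: (n_signals, n_samples) mixed observations; returns unmixed sources."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)      # center each observed channel
    d, E = np.linalg.eigh(np.cov(X))           # eigendecompose the covariance
    Z = E @ np.diag(d ** -0.5) @ E.T @ X       # whiten: cov(Z) = I
    W = []
    for _ in range(X.shape[0]):
        w = rng.standard_normal(X.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ Z
            # fixed-point update maximising non-Gaussianity of w^T Z
            w_new = (Z * np.tanh(wz)).mean(axis=1) - (1.0 - np.tanh(wz) ** 2).mean() * w
            for v in W:                        # deflation: stay orthogonal
                w_new -= (w_new @ v) * v
            w = w_new / np.linalg.norm(w_new)
        W.append(w)
    return np.vstack(W) @ Z
```

    Recovered components come back in arbitrary order and sign, which is the usual ICA ambiguity.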

  8. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels was implied by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra, and their associated retention times, are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high-resolution library. More specifically, a correlative measure was used that included the uncertainties in the least-squares fitting, as well as the probability of different proposals for the number of compounds in the mixture. The method was tested on simulated high-resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to detect accurately the retention times and the spectra of the peaks. In our case, with an extremely high degree of coelution, 5 out of the 7 compounds present under the selected region of interest were correctly assessed. Finally, the comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
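
    The paper's probabilistic assessment is richer than plain least squares; as a minimal sketch of the BIC step alone — fitting k = 1..K candidate peaks (amplitudes by linear least squares, with centers and widths held fixed purely for illustration) and letting BIC choose k — the peak positions below are made up:

```python
import numpy as np

def bic_component_count(t, y, centers, sigma=0.4):
    """Fit y with k = 1..len(centers) Gaussian components (amplitudes by
    linear least squares; centers and widths fixed for simplicity) and
    return the k minimising BIC = n*ln(RSS/n) + k*ln(n)."""
    n = len(y)
    basis = np.exp(-0.5 * ((t[:, None] - np.asarray(centers)[None, :]) / sigma) ** 2)
    best_k, best_bic = 0, np.inf
    for k in range(1, basis.shape[1] + 1):
        amp, *_ = np.linalg.lstsq(basis[:, :k], y, rcond=None)
        rss = float(np.sum((y - basis[:, :k] @ amp) ** 2))
        bic = n * np.log(rss / n) + k * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

    BIC charges ln(n) per extra parameter, so a third peak that only chases noise raises the criterion and is rejected.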

  9. The NUC and blind pixel eliminating in the DTDI application

    Science.gov (United States)

    Su, Xiao Feng; Chen, Fan Sheng; Pan, Sheng Da; Gong, Xue Yi; Dong, Yu Cui

    2013-12-01

    As infrared CMOS digital TDI (time delay and integration) has a simple structure, excellent performance and flexible operation, it has been used in more and more applications. Because of limitations in the production process, the focal plane array of an infrared detector has a large non-uniformity (NU) and a certain blind pixel rate. Both raise the noise and prevent TDI from working well. In this paper, the elements most important to system performance are analyzed: the NU of the optical system, the NU of the focal plane array, and the blind pixels in the focal plane array. A reasonable algorithm, which accounts for background removal and the linear response model of the infrared detector, is used for the NUC (non-uniformity correction) process when the infrared detector array is used for digital TDI. In order to eliminate the impact of the blind pixels, the concept of a surplus pixel method is introduced; through this method, the SNR (signal to noise ratio) can be improved while the spatial and temporal resolution remain unchanged. Finally, we use a MWIR (medium wave infrared) detector to carry out the experiment, and the result proves the effectiveness of the method.
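
    The paper's surplus-pixel scheme is not detailed in the abstract; as a sketch of the standard building blocks it rests on — per-pixel two-point gain/offset calibration from two uniform (blackbody) reference frames, with gain outliers flagged as blind pixels and replaced by a neighbourhood median — the function names and tolerances below are hypothetical:

```python
import numpy as np

def two_point_nuc(raw_lo, raw_hi, t_lo, t_hi, gain_tol=0.5):
    """Two-point non-uniformity correction from two uniform-source frames.
    A per-pixel gain/offset maps raw counts to the target levels; pixels
    whose gain deviates from the median by more than gain_tol (relative)
    are flagged as blind. Returns (gain, offset, blind_mask)."""
    gain = (t_hi - t_lo) / (raw_hi - raw_lo)
    offset = t_lo - gain * raw_lo
    g_med = np.median(gain)
    blind = np.abs(gain - g_med) > gain_tol * np.abs(g_med)
    return gain, offset, blind

def apply_nuc(frame, gain, offset, blind):
    """Correct a raw frame and replace flagged pixels with their 3x3
    neighbourhood median (a stand-in for the paper's surplus-pixel idea)."""
    out = gain * frame + offset
    padded = np.pad(out, 1, mode="edge")
    for i, j in zip(*np.nonzero(blind)):
        out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

    For a linear pixel the two-point correction is exact by construction; only pixels that do not follow the linear response (dead or stuck) need the replacement step.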

  10. 42 CFR 435.530 - Definition of blindness.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Definition of blindness. 435.530 Section 435.530... ISLANDS, AND AMERICAN SAMOA Categorical Requirements for Eligibility Blindness § 435.530 Definition of blindness. (a) Definition. The agency must use the same definition of blindness as used under SSI, except...

  11. "VisionTouch Phone" for the Blind.

    Science.gov (United States)

    Yong, Robest

    2013-10-01

    Our objective is to enable the blind to use smartphones with touchscreens to make calls and to send text messages (SMS) with ease, speed, and accuracy. We believe that with our proposed platform, which enables the blind to locate the position of the keypads, new games and educational and safety applications will increasingly be developed for the blind. This innovative idea can also be implemented on tablets for the blind, allowing them to use information websites such as Wikipedia and newspaper portals.

  12. Motor development of blind toddler

    OpenAIRE

    Likar, Petra

    2013-01-01

    For blind toddlers, development of motor skills enables possibilities for learning and exploring the environment. The purpose of this graduation thesis is to systematically mark the milestones in development of motor skills in blind toddlers, to establish different factors which affect this development, and to discover different ways for teachers for visually impaired and parents to encourage development of motor skills. It is typical of blind toddlers that they do not experience a wide varie...

  13. Causes of Severe Visual Impairment and Blindness: Comparative Data From Bhutanese and Laotian Schools for the Blind.

    Science.gov (United States)

    Farmer, Lachlan David Mailey; Ng, Soo Khai; Rudkin, Adam; Craig, Jamie; Wangmo, Dechen; Tsang, Hughie; Southisombath, Khamphoua; Griffiths, Andrew; Muecke, James

    2015-01-01

    To determine and compare the major causes of childhood blindness and severe visual impairment in Bhutan and Laos. Independent cross-sectional surveys. This survey consists of 2 cross-sectional observational studies. The Bhutanese component was undertaken at the National Institute for Vision Impairment, the only dedicated school for the blind in Bhutan. The Laotian study was conducted at the National Ophthalmology Centre and Vientiane School for the Blind. Children younger than age 16 were invited to participate. A detailed history and examination were performed consistent with the World Health Organization Prevention of Blindness Eye Examination Record. Of the 53 children examined in both studies, 30 were from Bhutan and 23 were from Laos. Forty percent of Bhutanese and 87.1% of Laotian children assessed were blind, with 26.7% and 4.3%, respectively, being severely visually impaired. Congenital causes of blindness were the most common, representing 45% and 43.5% of the Bhutanese and Laotian children, respectively. Anatomically, the primary site of blinding pathology differed between the cohorts. In Bhutan, the lens comprised 25%, with whole globe at 20% and retina at 15%, but in Laos, whole globe and cornea equally contributed at 30.4%, followed by retina at 17.4%. There was an observable difference in the rates of blindness/severe visual impairment due to measles, with no cases observed in the Bhutanese children but 20.7% of the total pathologies in the Laotian children attributable to congenital measles infection. Consistent with other studies, there is a high rate of blinding disease, which may be prevented, treated, or ameliorated.

  14. Causes of childhood blindness in the northeastern states of India

    Directory of Open Access Journals (Sweden)

    Bhattacharjee Harsha

    2008-01-01

    Full Text Available Background: The northeastern region (NER) of India is geographically isolated and ethno-culturally different from the rest of the country. There is a lacuna in the data on the causes of blindness and severe visual impairment in children from this region. Aim: To determine the causes of severe visual impairment and blindness amongst children from schools for the blind in the four states of the NER of India. Design and Setting: Survey of children attending special education schools for the blind in the NER. Materials and Methods: Blind and severely visually impaired children (best corrected visual acuity < 20/200 in the better eye), aged up to 16 years, underwent visual acuity estimation, external ocular examination, retinoscopy and fundoscopy. Refraction and low vision workup were done where indicated. The World Health Organization's reporting form was used to code the anatomical and etiological causes of visual loss. Statistical Analysis: Microsoft Excel Windows software with SPSS. Results: A total of 376 students were examined, of whom 258 fulfilled the eligibility criteria. The major anatomical causes of visual loss amongst the 258 were congenital anomalies (anophthalmos, microphthalmos) 93 (36.1%); corneal conditions (scarring, vitamin A deficiency) 94 (36.7%); cataract or aphakia 28 (10.9%); retinal disorders 15 (5.8%); and optic atrophy 14 (5.3%). Nearly half of the children (48.5%) were blind from conditions which were either preventable or treatable. Conclusion: Nearly half the childhood blindness in the NER states of India is avoidable, and vitamin A deficiency forms an important component, unlike in other Indian states. More research and a multisectoral effort are needed to tackle congenital anomalies.

  15. 20 CFR 416.983 - How we evaluate statutory blindness.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false How we evaluate statutory blindness. 416.983... AGED, BLIND, AND DISABLED Determining Disability and Blindness Blindness § 416.983 How we evaluate statutory blindness. We will find that you are blind if you are statutorily blind within the meaning of...

  16. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Science.gov (United States)

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on hard decisions is deduced in detail, and the standards of parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve the accuracy of blind recognition. Therefore, combining this with the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words. Moreover, the improved algorithm enhances the accuracy of blind recognition markedly.
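
    The paper's algorithm is specified elsewhere; as a generic illustration of why soft decisions help — sync bits repeat identically every frame, so folding the soft demodulator outputs at the true frame length makes their column means stand out against data bits, which average toward zero — the frame parameters below are made up:

```python
import numpy as np

def estimate_frame_sync(soft, p_min, p_max, thresh=0.5):
    """Blindly estimate frame length and sync-bit positions from soft
    decisions (nominally +/-1 plus noise). At the true period, sync columns
    fold coherently (squared column mean near 1) while data columns average
    toward zero (squared column mean near 1/n_frames)."""
    best_period, best_score, best_pos = None, -np.inf, None
    for p in range(p_min, p_max + 1):
        n_frames = len(soft) // p
        if n_frames < 2:
            continue
        col_means = soft[:n_frames * p].reshape(n_frames, p).mean(axis=0)
        score = np.mean(col_means ** 2)
        if score > best_score:
            best_period, best_score = p, score
            best_pos = np.flatnonzero(np.abs(col_means) > thresh)
    return best_period, best_pos
```

    Integer multiples of the true period also fold coherently, so in this naive form p_max must stay below twice the shortest plausible frame length; a full recognizer would penalise such harmonics.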

  17. Causes of severe visual impairment and blindness in students in schools for the blind in Northwest Ethiopia.

    Science.gov (United States)

    Asferaw, Mulusew; Woodruff, Geoffrey; Gilbert, Clare

    2017-01-01

    To determine the causes of severe visual impairment and blindness (SVI/BL) among students in schools for the blind in Northwest Ethiopia and to identify preventable and treatable causes. Students attending nine schools for the blind in Northwest Ethiopia were examined in May and June 2015, and causes were assigned using the standard WHO record form for children with blindness and low vision. 383 students were examined, 357 (93%) of whom were severely visually impaired or blind (... were blind and four were SVI; total 104). The major anatomical site of visual loss among those 0-15 years was cornea/phthisis (47.1%), usually due to measles and vitamin A deficiency, followed by whole globe (22.1%), lens (9.6%) and uvea (8.7%). Among students aged 16 years and above, cornea/phthisis (76.3%) was the major anatomical cause, followed by lens (6.3%), whole globe (4.7%), uvea (3.6%) and optic nerve (3.2%). The leading underlying aetiology among students aged ... blindness, mainly as the result of measles and vitamin A deficiency, is still a public health problem in Northwest Ethiopia, and this has not changed as observed in other low-income countries. More than three-fourths of the causes of SVI/BL in students in schools for the blind are potentially avoidable, with measles/vitamin A deficiency and cataract being the leading causes.

  18. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging has been widely used in the clinic as an accurate and fast examination for acute ischemic stroke. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in cerebral PCT imaging in the case of low mAs.
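
    The STV-regularized solver is the paper's contribution and is not reproduced here; as a sketch of the baseline it improves on — truncated-SVD (sSVD) deconvolution of the flow-scaled residue function from the arterial input function (AIF) and tissue curve — the following uses synthetic, idealised curves:

```python
import numpy as np

def ssvd_deconvolve(aif, tissue, dt, tsvd_frac=0.1):
    """Truncated-SVD perfusion deconvolution: solve tissue = dt * (A k) for
    the flow-scaled residue function k(t) = CBF * R(t), where A is the
    lower-triangular convolution matrix built from the AIF. Singular values
    below tsvd_frac * s_max are discarded to suppress noise amplification."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > tsvd_frac * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ np.asarray(tissue)))
```

    CBF is then read off as the maximum of the recovered k; STV-type regularizers replace this hard spectral cutoff with a spatio-temporal smoothness prior, which is what allows low-mAs acquisitions.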

  19. SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution

    Science.gov (United States)

    Vallée, M.; Charléty, J.; Ferreira, A. M. G.; Delouis, B.; Vergoz, J.

    2011-01-01

    Accurate and fast magnitude determination for large, shallow earthquakes is of key importance for post-seismic response and tsunami alert purposes. When no local real-time data are available, which is today the case for most subduction earthquakes, the first information comes from teleseismic body waves. Standard body-wave methods give accurate magnitudes for earthquakes up to Mw = 7-7.5. For larger earthquakes, the analysis is more complex, because of the non-validity of the point-source approximation and the interaction between direct and surface-reflected phases. The latter effect acts as a strong high-pass filter, which complicates the magnitude determination. We here propose an automated deconvolutive approach that does not impose any simplifying assumptions about the rupture process and is thus well adapted to large earthquakes. We first determine the source duration from the length of the high-frequency (1-3 Hz) signal content. Deconvolution of synthetic double-couple point-source signals—depending on the four earthquake parameters strike, dip, rake and depth—from the windowed real body-wave signals (including P, PcP, PP, SH and ScS waves) gives the apparent source time function (STF). We search for the optimal combination of these four parameters that respects the physical features of any STF: causality, positivity and stability of the seismic moment at all stations. Once this combination is retrieved, integration of the STFs directly gives the moment magnitude. We apply this new approach, referred to as the SCARDEC method, to most of the major subduction earthquakes of the period 1990-2010. Magnitude differences between the Global Centroid Moment Tensor (CMT) and SCARDEC solutions may reach 0.2, but the values are consistent once we take into account that Global CMT solutions for large, shallow earthquakes suffer from a known trade-off between dip and seismic moment. We show by modelling long-period surface waves of these events that
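    The final step of the abstract (integrating the STF to get the moment magnitude) can be sketched directly: the time integral of the apparent moment-rate function is the seismic moment M0, which converts to Mw via the standard relation Mw = (2/3)(log10 M0 − 9.1) for M0 in N·m. The triangular STF below is a hypothetical example, not SCARDEC output.

    ```python
    import numpy as np

    def trapezoid(y, dx):
        """Composite trapezoidal rule (avoids NumPy 2.0's trapz rename)."""
        return float(np.sum(0.5 * (y[1:] + y[:-1])) * dx)

    def moment_magnitude(stf, dt):
        """Integrate an apparent source time function (moment rate, N*m/s)
        to the seismic moment M0, then apply Mw = (2/3)*(log10(M0) - 9.1)."""
        m0 = trapezoid(stf, dt)
        return (2.0 / 3.0) * (np.log10(m0) - 9.1)

    # Hypothetical 100 s triangular STF scaled to a total moment of 1e21 N*m.
    dt = 0.5
    t = np.arange(0.0, 100.0 + dt, dt)
    stf = np.interp(t, [0.0, 50.0, 100.0], [0.0, 1.0, 0.0])  # unit triangle
    stf *= 1e21 / trapezoid(stf, dt)                          # fix the moment
    print(round(moment_magnitude(stf, dt), 2))                # -> 7.93
    ```

    For M0 = 1e21 N·m this gives Mw = (2/3)(21 − 9.1) ≈ 7.93, which is why STF stability across stations translates directly into a stable magnitude estimate.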

  20. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF...... filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample or about 1 billion operations per second for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature...... interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage...
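    The real-time budget quoted in the record fragments can be checked with simple arithmetic, using only the figures given there (~70 operations per RF sample, ~1e9 operations/s required, a 1.2e9 FLOP/s system); the implied effective sample rate is derived from those numbers, not stated in the record.

    ```python
    # Throughput budget for real-time deconvolution, from the record's figures.
    ops_per_sample = 70          # arithmetic operations per RF sample
    required_ops_per_s = 1.0e9   # stated real-time requirement
    system_peak = 1.2e9          # multi-processor system's stated peak FLOP/s

    samples_per_s = required_ops_per_s / ops_per_sample  # implied RF rate
    headroom = system_peak / required_ops_per_s          # peak vs. requirement

    print(f"implied processed sample rate: {samples_per_s / 1e6:.1f} MHz")
    print(f"headroom at peak: {headroom:.2f}x")
    ```

    The implied processed rate (~14.3 MHz) is below the 20 MHz acquisition rate, and the 1.2x headroom at peak explains why the adaptive, floating-point nature of the filters made a dedicated multi-processor system necessary.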