WorldWideScience

Sample records for block-circulant deconvolution matrix

  1. Matrix-free constructions of circulant and block circulant preconditioners

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chao; Ng, Esmond G.; Penczek, Pawel A.

    2001-12-01

    A framework for constructing circulant and block circulant preconditioners (C) for a symmetric linear system Ax=b arising from certain signal and image processing applications is presented in this paper. The proposed scheme does not make explicit use of matrix elements of A. It is ideal for applications in which A only exists in the form of a matrix-vector multiplication routine, and in which the process of extracting matrix elements of A is costly. The proposed algorithm takes advantage of the fact that for many linear systems arising from signal or image processing applications, eigenvectors of A can be well represented by a small number of Fourier modes. Therefore, the construction of C can be carried out in the frequency domain by carefully choosing its eigenvalues so that the condition number of C^T AC can be reduced significantly. We illustrate how to construct the spectrum of C in such a way that the smallest eigenvalues of C^T AC overlap with those of A extremely well, while the largest eigenvalues of C^T AC are smaller than those of A by several orders of magnitude. Numerical examples are provided to demonstrate the effectiveness of the preconditioner in accelerating the solution of linear systems arising from image reconstruction applications.
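
    The construction above is matrix-free: only a matvec with A is needed. A minimal sketch of the general idea (not the authors' eigenvalue-matching scheme) is to probe A with Fourier modes, take floored Rayleigh quotients as the circulant spectrum, and apply the preconditioner's inverse with two FFTs. All names below (circulant_precond_spectrum, floor) are illustrative, and the matvec routine is assumed to accept complex vectors.

```python
import numpy as np

def circulant_precond_spectrum(matvec, n, floor=1e-8):
    # Probe A with unit-norm Fourier modes; the Rayleigh quotients
    # define the spectrum of a circulant approximation C of A.
    # (O(n) matvecs here for clarity; the paper selects few modes.)
    lam = np.empty(n)
    j = np.arange(n)
    for k in range(n):
        f = np.exp(2j * np.pi * k * j / n) / np.sqrt(n)
        lam[k] = max(np.real(np.vdot(f, matvec(f))), floor)
    return lam

def apply_circulant_inverse(lam, r):
    # C^{-1} r, where C = F^H diag(lam) F is diagonalized by the DFT
    return np.real(np.fft.ifft(np.fft.fft(r) / lam))
```

    The returned spectrum could then drive a preconditioned conjugate-gradient iteration; floored eigenvalues keep the preconditioner positive definite.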

  2. Toeplitz block circulant matrix optimized with particle swarm optimization for compressive imaging

    Science.gov (United States)

    Tao, Huifeng; Yin, Songfeng; Tang, Cong

    2016-10-01

    Compressive imaging is an imaging approach based on compressive sensing theory, which can capture a high-resolution image from a small set of measurements. At the core of compressive imaging, the design of the measurement matrix is crucial to ensuring that the image can be recovered from the measurements. Owing to its fast computation and ease of hardware implementation, the Toeplitz block circulant matrix is proposed to realize the encoded samples. The measurement matrix is usually optimized to improve the image reconstruction quality. However, existing optimization methods easily destroy the matrix structure when applied to the Toeplitz block circulant matrix, and their deterministic iterative processes are inflexible because they require the optimization task to satisfy certain mathematical properties. To overcome this problem, a novel method for optimizing the Toeplitz block circulant matrix based on the particle swarm optimization (PSO) intelligent algorithm is proposed in this paper. The objective function is established by approaching a target matrix, namely the Gram matrix truncated by the Welch threshold. The optimization variable is the vector of free entries instead of the Gram matrix itself. The experimental results indicate that the Toeplitz block circulant measurement matrix can be optimized while preserving the matrix structure by our method, and that the reconstruction quality is improved.
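
    As a hedged illustration of the structure being optimized, the sketch below assembles a Toeplitz-block-circulant matrix from a vector of free entries (each block a circulant, blocks in a Toeplitz layout, which is one plausible reading of the structure) and evaluates a simple coherence objective on the normalized Gram matrix. The paper's actual fitness compares the Gram matrix to a Welch-threshold-truncated target and optimizes the free entries with PSO; function and parameter names here are illustrative.

```python
import numpy as np
from scipy.linalg import circulant

def tbc_matrix(free, nb, b):
    # Each b x b block is a circulant generated by b free entries;
    # the nb x nb grid of blocks has a Toeplitz layout, so 2*nb - 1
    # block generators (hence (2*nb - 1) * b free entries) suffice.
    gens = np.asarray(free).reshape(2 * nb - 1, b)
    blocks = [circulant(g) for g in gens]
    return np.vstack([
        np.hstack([blocks[i - j + nb - 1] for j in range(nb)])
        for i in range(nb)
    ])

def coherence_objective(Phi):
    # Worst-case off-diagonal magnitude of the normalized Gram matrix;
    # a PSO fitness would compare this Gram matrix to the truncated target.
    Phin = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    G = Phin.T @ Phin
    return np.max(np.abs(G - np.eye(G.shape[0])))
```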

  3. Algorithms for Finding the Inverses of Factor Block Circulant Matrices

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, algorithms for finding the inverse of a factor block circulant matrix, a factor block retrocirculant matrix, and a partitioned matrix with factor block circulant blocks over the complex field are presented. In addition, two algorithms for the inverse of a factor block circulant matrix over the quaternion division algebra are proposed.
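
    For the unblocked scalar case, the inverse of a factor circulant (r-circulant) matrix can be applied in O(n log n) time via the standard diagonalization of r-circulants by a scaled DFT. The sketch below assumes the first-row convention C[i, j] = c[(j - i) mod n], multiplied by r whenever j < i, and illustrates the underlying algebra rather than the paper's block or quaternion algorithms; names are illustrative.

```python
import numpy as np

def solve_factor_circulant(c, r, b):
    # Solve C x = b for the n x n r-circulant with first row c.
    # With psi any fixed n-th root of r and D = diag(psi^0,...,psi^{n-1}),
    # the eigenvectors of C are D-scaled Fourier modes, with eigenvalues
    # given by the DFT of the scaled first row.
    n = len(c)
    psi = complex(r) ** (1.0 / n)          # any fixed n-th root of r works
    d = psi ** np.arange(n)
    lam = np.fft.fft(np.asarray(c) * d)    # eigenvalues of C
    x = d * np.fft.fft(np.fft.ifft(np.asarray(b) / d) / lam)
    # imaginary part is numerical noise for real inputs when r > 0
    return np.real_if_close(x)
```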

  4. Sparse Non-negative Matrix Factor 2-D Deconvolution

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel N.

    2006-01-01

    We introduce the non-negative matrix factor 2-D deconvolution (NMF2D) model, which decomposes a matrix into a 2-dimensional convolution of two factor matrices. This model is an extension of the non-negative matrix factor deconvolution (NMFD) recently introduced by Smaragdis (2004). We derive and prove the convergence of two algorithms for NMF2D based on minimizing the squared error and the Kullback-Leibler divergence, respectively. Next, we introduce a sparse non-negative matrix factor 2-D deconvolution model that gives easily interpretable decompositions, and devise two algorithms for computing this form of factorization. The developed algorithms have been used for source separation and music transcription.
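
    A minimal sketch of NMF2D with multiplicative updates for the squared-error cost is given below. It substitutes circular shifts (np.roll) for the paper's truncating shifts, for brevity, and all parameter names are illustrative.

```python
import numpy as np

def nmf2d(V, d, n_tau, n_phi, n_iter=200, eps=1e-9, seed=0):
    # Model: V ~ sum_{t,p} shift_down^p(W[t]) @ shift_right^t(H[p]),
    # updated with Lee-Seung-style multiplicative rules (sketch only).
    m, n = V.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n_tau, m, d)) + eps
    H = rng.random((n_phi, d, n)) + eps

    def model():
        L = np.zeros((m, n))
        for t in range(n_tau):
            for p in range(n_phi):
                L += np.roll(W[t], p, axis=0) @ np.roll(H[p], t, axis=1)
        return L

    for _ in range(n_iter):
        L = model()
        for t in range(n_tau):                      # update W[t]
            num = np.zeros((m, d)); den = np.zeros((m, d))
            for p in range(n_phi):
                Hp = np.roll(H[p], t, axis=1)
                num += np.roll(V, -p, axis=0) @ Hp.T
                den += np.roll(L, -p, axis=0) @ Hp.T
            W[t] *= num / (den + eps)
        L = model()
        for p in range(n_phi):                      # update H[p]
            num = np.zeros((d, n)); den = np.zeros((d, n))
            for t in range(n_tau):
                Wt = np.roll(W[t], p, axis=0)
                num += Wt.T @ np.roll(V, -t, axis=1)
                den += Wt.T @ np.roll(L, -t, axis=1)
            H[p] *= num / (den + eps)
    return W, H
```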

  5. AN IMPROVED FAST BLIND DECONVOLUTION ALGORITHM BASED ON DECORRELATION AND BLOCK MATRIX

    Institute of Scientific and Technical Information of China (English)

    Yang Jun'an; He Xuefan; Tan Ying

    2008-01-01

    In order to alleviate the shortcomings of most blind deconvolution algorithms, this paper proposes an improved fast algorithm for blind deconvolution based on a decorrelation technique and a broadband block matrix. Although the original algorithm can overcome the shortcomings of current blind deconvolution algorithms, it is constrained in that the number of source signals must be less than the number of channels. The improved algorithm removes this constraint by using the decorrelation technique. In addition, the improved algorithm increases the separation speed by improving the computation of the output signal matrix. Simulation results demonstrate the validity and fast separation of the improved algorithm.

  6. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency, we factorize a spectrogram representation of music into components corresponding to individual instruments. Based on this factorization we separate the instruments using spectrogram masking. The proposed algorithm has applications in computational auditory scene analysis, music information retrieval, and automatic music transcription.

  7. High Resolution Turntable Radar Imaging via Two Dimensional Deconvolution with Matrix Completion

    Science.gov (United States)

    Lu, Xinfei; Xia, Jie; Yin, Zhiping; Chen, Weidong

    2017-01-01

    Resolution is the bottleneck for the application of radar imaging; it is limited by the bandwidth in the range dimension and by the synthetic aperture in the cross-range dimension. The demand for high azimuth resolution inevitably results in a large number of cross-range samples, which always requires a large number of transmit-receive channels or a long observation time. Compressive sensing (CS)-based methods could be used to reduce the samples, but they suffer from the difficulty of designing the measurement matrix and are not robust enough in practical applications. In this paper, based on the two-dimensional (2D) convolution model of the echo after the matched filter (MF), we propose a novel 2D deconvolution algorithm for turntable radar to improve the radar imaging resolution. Additionally, in order to reduce the cross-range samples, we introduce a new matrix completion (MC) algorithm based on a hyperbolic tangent constraint to improve the performance of MC with undersampled data. We also present a new way of reconstructing the echo matrix for the situation in which only partial cross-range data are observed and some columns of the echo matrix are missing. The new matrix has a better low-rank property and needs just one MC operation for all of the missing elements, in contrast to existing approaches. Numerical simulations and experiments are carried out to demonstrate the effectiveness of the proposed method. PMID:28282904

  8. QUASI-SYSTEMATIC BLOCK-CIRCULANT LDPC CODES

    Institute of Scientific and Technical Information of China (English)

    Xu Ying; Wei Guo

    2008-01-01

    A class of Quasi-Systematic Block-Circulant Low-Density Parity-Check (QSBC-LDPC) codes is proposed. Block-circulant LDPC codes have been studied extensively in recent years, because the simple structures of their parity-check matrices are very helpful in reducing implementation complexity. QSBC-LDPC codes are special block-circulant LDPC codes with quasi-systematic parity-check matrices. The memory requirements for encoders of QSBC-LDPC codes are small, and the encoding process can be carried out in a simple recursive way with low complexity. Research shows that QSBC-LDPC codes can provide remarkable performance with low encoding complexity.

  9. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes whose girth can be made arbitrarily large as a multiple of 8. We then derive a convolutional form of these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  10. Nonlinear stochastic regularization to characterize tissue residue function in bolus-tracking MRI: assessment and comparison with SVD, block-circulant SVD, and Tikhonov.

    Science.gov (United States)

    Zanderigo, Francesca; Bertoldo, Alessandra; Pillonetto, Gianluigi; Cobelli, Claudio

    2009-05-01

    An accurate characterization of the tissue residue function R(t) in bolus-tracking magnetic resonance imaging is of crucial importance for quantifying cerebral hemodynamics. Estimating R(t) requires solving a deconvolution problem. The most popular deconvolution method is singular value decomposition (SVD). However, SVD is known to have some limitations, e.g., R(t) profiles exhibit nonphysiological oscillations and take on negative values. In addition, SVD estimates are biased in the presence of bolus delay and dispersion. Recently, other deconvolution methods have been proposed, in particular block-circulant SVD (cSVD) and Tikhonov regularization (TIKH). Here we propose a new method based on nonlinear stochastic regularization (NSR). NSR is tested on simulated data and compared with SVD, cSVD, and TIKH in the presence and absence of bolus dispersion. A clinical case in one patient has also been considered. NSR is shown to perform better than SVD, cSVD, and TIKH in reconstructing both the peak and the residue function, in particular when bolus dispersion is considered. In addition, unlike SVD, cSVD, and TIKH, NSR always provides positive and smooth R(t).
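
    For reference, the block-circulant SVD (cSVD) baseline mentioned above follows a standard recipe: the arterial input function (AIF) convolution matrix is embedded in a zero-padded circulant so that the deconvolution becomes insensitive to bolus delay, and small singular values are truncated. The sketch below follows that recipe; function names and the default threshold are illustrative.

```python
import numpy as np

def csvd_residue(aif, conc, dt, thresh=0.10):
    # Embed the AIF convolution matrix in a 2n x 2n circulant
    # (zero-padding avoids the causality assumption of plain SVD),
    # then invert with truncation of small singular values.
    n = len(aif)
    L = 2 * n
    col = np.zeros(L)
    col[:n] = np.asarray(aif) * dt
    A = np.column_stack([np.roll(col, k) for k in range(L)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > thresh * s[0], 1.0 / s, 0.0)
    c = np.concatenate([conc, np.zeros(L - n)])
    r = Vt.T @ (s_inv * (U.T @ c))
    return r[:n]          # estimate of R(t), scaled by flow
```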

  11. Deconvolution and Regularization with Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2002-01-01

    … of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show …

  12. An Efficient Algorithm for Maximum-Entropy Extension of Block-Circulant Covariance Matrices

    CERN Document Server

    Carli, Francesca P; Pavon, Michele; Picci, Giorgio

    2011-01-01

    This paper deals with maximum entropy completion of partially specified block-circulant matrices. Since positive definite symmetric circulants happen to be covariance matrices of stationary periodic processes, in particular of stationary reciprocal processes, this problem has applications in signal processing, in particular to image modeling. Maximum entropy completion is strictly related to maximum likelihood estimation subject to certain conditional independence constraints. The maximum entropy completion problem for block-circulant matrices is a nonlinear problem which has recently been solved by the authors, although leaving open the problem of an efficient computation of the solution. The main contribution of this paper is to provide an efficient algorithm for computing the solution. Simulation shows that our iterative scheme outperforms various existing approaches, especially for large dimensional problems. A necessary and sufficient condition for the existence of a positive definite circulant completio...

  13. Deconvolution and Regularization with Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2002-01-01

    By deconvolution we mean the solution of a linear first-kind integral equation with a convolution-type kernel, i.e., a kernel that depends only on the difference between the two independent variables. Deconvolution problems are special cases of linear first-kind Fredholm integral equations, whose … how Toeplitz matrix-vector products are computed by means of the FFT, being useful in iterative methods. We also introduce the Kronecker product and show how it is used in the discretization and solution of 2-D deconvolution problems whose variables separate.
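
    The FFT-based Toeplitz matrix-vector product mentioned in this abstract rests on circulant embedding; a minimal sketch of the standard construction (names illustrative):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    # T has first column c and first row r (with r[0] == c[0]); embed T
    # in a 2n x 2n circulant whose action costs three FFTs of length 2n.
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n].real                            # top half equals T @ x
```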

  14. Block circulant and block Toeplitz approximants of a class of spatially distributed systems-An LQR perspective

    NARCIS (Netherlands)

    Iftime, Orest V.

    2012-01-01

    In this paper block circulant and block Toeplitz long strings of MIMO systems with finite length are compared with their corresponding infinite-dimensional spatially invariant systems. The focus is on the convergence of the sequence of solutions to the control Riccati equations and the convergence of …

  15. Deconvolution of Lorentzian broadened spectra. Pt. 1. Direct deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Nikolov, S.; Kantchev, K.

    1987-04-15

    A method is discussed for the deconvolution of Lorentzian-broadened experimental spectra directly in the "time" domain, that is, in the domain of the independent spectroscopic variable. The method consists of a numerical convolution of the spectrum with a deconvoluting function which is calculated in conformity with a theoretical analysis of the sampled form of the input and output spectra and their Fourier transforms. An almost complete elimination of the systematic distortions and a complete degree of deconvolution are achieved. The restrictions imposed by noise enhancement are estimated.

  16. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  17. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  18. Quantitative deconvolution microscopy.

    Science.gov (United States)

    Goodwin, Paul C

    2014-01-01

    The light microscope is an essential tool for the study of cells, organelles, biomolecules, and subcellular dynamics. A paradox exists in microscopy whereby the higher the needed lateral resolution, the more the image is degraded by out-of-focus information. This creates a significant need to generate axial contrast whenever high lateral resolution is required. One strategy for generating contrast is to measure or model the optical properties of the microscope and to use that model to algorithmically reverse some of the consequences of high-resolution imaging. Deconvolution microscopy implements model-based methods to enable the full diffraction-limited resolution of the microscope to be exploited even in complex and living specimens.

  19. Multi-frame partially saturated images blind deconvolution

    Science.gov (United States)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated, and that the restored images have richer detail and fewer negative effects compared to state-of-the-art methods.
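
    A hedged single-frame sketch of the weighting idea: a Richardson-Lucy update in which a binary mask down-weights saturated pixels so they do not corrupt the estimate. The paper's method is multi-frame and jointly estimates the PSF; this illustrates only the weighted data term, and sat_level (assuming intensities normalized to [0, 1]) is an assumed parameter.

```python
import numpy as np

def weighted_rl(blurred, otf, sat_level=0.98, n_iter=30, eps=1e-9):
    # otf: FFT of the centered, normalized PSF (periodic boundaries).
    w = (blurred < sat_level).astype(float)      # 0 at saturated pixels
    conv = lambda z: np.fft.ifft2(np.fft.fft2(z) * otf).real
    corr = lambda z: np.fft.ifft2(np.fft.fft2(z) * np.conj(otf)).real
    x = np.clip(blurred, eps, None)
    for _ in range(n_iter):
        est = conv(x) + eps
        x *= corr(w * blurred / est) / (corr(w) + eps)
    return x
```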

  20. Sharp recovery bounds for convex deconvolution, with applications

    CERN Document Server

    McCoy, Michael B

    2012-01-01

    Deconvolution refers to the challenge of identifying two structured signals given only the sum of the two signals and prior information about their structures. A standard example is the problem of separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis. Another familiar case is the problem of decomposing an observed matrix into a low-rank matrix plus a sparse matrix. This paper describes and analyzes a framework, based on convex optimization, for solving these deconvolution problems and many others. This work introduces a randomized signal model which ensures that the two structures are incoherent, i.e., generically oriented. For an observation from this model, the calculus of spherical integral geometry provides an exact formula that describes when the optimization problem will succeed (or fail) to deconvolve the two constituent signals with high probability. This approach identifies a summary statistic that reflects the complexity of a particu...

  1. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern and the targets' backscattering coefficients. Therefore, deconvolution algorithms are proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially highly ill-posed, sensitive to noise, and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed to improve the performance of deconvolution, which considerably alleviates the ill-posedness of single-channel deconvolution. To depict the performance improvement obtained by multiple channels more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular value distribution of the observation matrix, and these are used to compare different deconvolution systems. We present two multi-channel deconvolution algorithms which improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data are presented to verify the effectiveness of the proposed imaging methods.

  2. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L2-norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter.
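
    The three filters compared above can be written compactly for a 1-D periodic signal; a minimal sketch, in which the Tikhonov variant is reduced to a first-difference penalty in the Fourier domain (the paper performs a regularized matrix inversion) and all parameter defaults are illustrative:

```python
import numpy as np

def fourier_division(y, h):
    # unregularized division: unstable wherever |H| is small
    H = np.fft.fft(h, len(y))
    return np.fft.ifft(np.fft.fft(y) / H).real

def wiener_deconv(y, h, nsr=1e-2):
    # nsr: (assumed constant) noise-to-signal power ratio
    H = np.fft.fft(h, len(y))
    return np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H)**2 + nsr)).real

def tikhonov_deconv(y, h, lam=1e-2):
    # L2 penalty on the first difference of the estimate
    H = np.fft.fft(h, len(y))
    D = np.fft.fft([1.0, -1.0], len(y))
    return np.fft.ifft(np.fft.fft(y) * np.conj(H)
                       / (np.abs(H)**2 + lam * np.abs(D)**2)).real
```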

  3. Approximate Deconvolution Reduced Order Modeling

    CERN Document Server

    Xie, Xuping; Wang, Zhu; Iliescu, Traian

    2015-01-01

    This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^-3).

  4. Deconvolution Method for TOFD Technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sun Heum; Kim, Sun Hyoung; Kong, Yong Hae [Soonchunhyang University, Asan (Korea, Republic of); Lee, Weon Heum [Acohlap, Bucheon (Korea, Republic of)

    1999-12-15

    The time-of-flight diffraction (TOFD) method is used in nondestructive tests of piping and pressure vessels because of its advantages over the pulse-echo technique: its speed, objectivity, repeatability, and its insensitivity to specimen surface conditions and discontinuity orientation. However, one of the weak points of the TOFD method is the dead zone in sub-surface resolution induced by lateral waves. We solved the dead-zone problem near the sub-surface by using a deconvolution method, and the developed ultrasonic testing system showed high performance.

  5. Wavelet-Fourier self-deconvolution

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Using a wavelet function as the filter function of Fourier self-deconvolution, a new method of resolving overlapped peaks, wavelet-Fourier self-deconvolution, is developed. The properties of different wavelet deconvolution functions are studied. In addition, a cutoff-value coefficient method for eliminating artificial peaks and a wavelet method for removing shoulder peaks using the ratio of the maximum peak to the minimum peak are established. As a result, some problems in classical Fourier self-deconvolution are solved, such as poor denoising, complicated processing, and the frequent appearance of artificial and shoulder peaks. Wavelet-Fourier self-deconvolution is applied to the determination of multiple components in oscillographic chronopotentiometry. Experimental results show that the method is simpler and more effective.
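
    A sketch of the classical Fourier self-deconvolution step that this record builds on: the Lorentzian decay is divided out in the Fourier domain and a smooth filter function tapers the amplified noise. Here a raised-cosine window stands in for the paper's wavelet filter function, and all parameter names are illustrative.

```python
import numpy as np

def fourier_self_deconv(y, dx, hwhm, cutoff):
    # A Lorentzian of half-width `hwhm` (in x-units) contributes
    # exp(-2*pi*hwhm*|t|) in the Fourier domain; dividing it out
    # narrows overlapped peaks, and the window W suppresses the
    # amplified high-|t| noise.
    n = len(y)
    t = np.abs(np.fft.fftfreq(n, d=dx))
    Y = np.fft.fft(y) * np.exp(2 * np.pi * hwhm * t)
    W = np.where(t < cutoff, np.cos(0.5 * np.pi * t / cutoff) ** 2, 0.0)
    return np.fft.ifft(Y * W).real
```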

  6. Wavelet-Fourier self-deconvolution

    Institute of Scientific and Technical Information of China (English)

    Zheng Jianbin; Zhang Hongquan; Gao Hong

    2000-01-01

    Using a wavelet function as the filter function of Fourier self-deconvolution, a new method of resolving overlapped peaks, wavelet-Fourier self-deconvolution, is developed. The properties of different wavelet deconvolution functions are studied. In addition, a cutoff-value coefficient method for eliminating artificial peaks and a wavelet method for removing shoulder peaks using the ratio of the maximum peak to the minimum peak are established. As a result, some problems in classical Fourier self-deconvolution are solved, such as poor denoising, complicated processing, and the frequent appearance of artificial and shoulder peaks. Wavelet-Fourier self-deconvolution is applied to the determination of multiple components in oscillographic chronopotentiometry. Experimental results show that the method is simpler and more effective.

  7. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Background: For many gene structures it is impossible to resolve intensity data uniquely to establish the abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results: In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion: The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as a regression fit to probe sequence models. We demonstrate its efficacy through extensive simulations as well as various biological data.

  8. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.

    Science.gov (United States)

    Pnevmatikakis, Eftychios A; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M; Peterka, Darcy S; Yuste, Rafael; Paninski, Liam

    2016-01-20

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.

  9. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    Wu Qingju; Tian Xiaobo; Zhang Nailing; Li Weiping; Zeng Rongsheng

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
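
    A hedged sketch of the time-domain machinery described above: the deconvolution filter solves Toeplitz normal equations built from the vertical-component autocorrelation and the radial-vertical cross-correlation, and scipy's solve_toeplitz applies the Levinson-Durbin recursion internally. The maximum-entropy construction of the correlations is not reproduced; simple water-level damping stands in for it, and function and parameter names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def receiver_function_filter(z, r, nlags, water=1e-2):
    # z, r: equal-length vertical and radial seismograms.
    # Normal equations: Toeplitz(autocorr of z) @ g = crosscorr(r, z).
    n = len(z)
    ac = np.correlate(z, z, 'full')[n - 1:n - 1 + nlags].copy()
    cc = np.correlate(r, z, 'full')[n - 1:n - 1 + nlags]
    ac[0] *= 1.0 + water          # water-level damping for stability
    return solve_toeplitz(ac, cc)  # Levinson-Durbin under the hood
```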

  10. (AJST) EULER DECONVOLUTION AND SPECTRAL ANALYSIS OF ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    ABSTRACT: Existing regional aeromagnetic data from the south-central Zimbabwe craton has … on the geological units and structures for depth constraints on the geotectonic … process of deconvolution has been demonstrated to be an …

  11. On the regularity of trust region-cg algorithm: With application to deconvolution problem

    Institute of Scientific and Technical Information of China (English)

    Wang, Yanfei (王彦飞)

    2003-01-01

    The deconvolution problem is a main topic in signal processing, and many practical applications require solving deconvolution problems. An important example is image reconstruction. Usually, researchers like to use regularization methods to deal with this problem, but the cost of computation is high due to the fact that direct methods are used. This paper develops a trust region-cg method, a kind of iterative method, to solve this kind of problem. The regularity of the method is proved. Based on the special structure of the discrete matrix, the FFT can be used for calculation. Hence, combining the trust region-cg method with the FFT is suitable for solving large-scale problems in signal processing.

  12. Compressed Blind De-convolution

    CERN Document Server

    Saligrama, V

    2009-01-01

    Suppose the signal x is realized by driving a k-sparse signal u through an arbitrary unknown stable discrete linear time-invariant system H. These types of processes arise naturally in reflection seismology. In this paper we are interested in several problems: (a) Blind deconvolution: can we recover both the filter H and the sparse signal u from noisy measurements? (b) Compressive sensing: is x compressible in the conventional sense of compressed sensing? Namely, can x, u and H be reconstructed from a sparse set of measurements? We develop novel L1 minimization methods to solve both cases and establish sufficient conditions for exact recovery for the case when the unknown system H is auto-regressive (i.e., all pole) of a known order. In the compressed sensing/sampling setting it turns out that both H and x can be reconstructed from O(k log(n)) measurements under certain technical conditions on the support structure of u. Our main idea is to pass x through a linear time invariant system G and collect O(k lo…

  13. Total Variation Deconvolution using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-07-01

    Deblurring is the inverse problem of restoring an image that has been blurred and possibly corrupted with noise. Deconvolution refers to the case where the blur to be removed is linear and shift-invariant, so it may be expressed as a convolution of the image with a point spread function. Convolution corresponds in the Fourier domain to multiplication, and deconvolution is essentially Fourier division. The challenge is that, since the multipliers are often small for high frequencies, direct division is unstable and plagued by noise present in the input image. Effective deconvolution requires a balance between frequency recovery and noise suppression. Total variation (TV) regularization is a successful technique for achieving this balance in deblurring problems. It was originally developed for image denoising by Rudin, Osher, and Fatemi and then applied to deconvolution by Rudin and Osher. In this article, we discuss TV-regularized deconvolution with Gaussian noise and its efficient solution using the split Bregman algorithm of Goldstein and Osher. We show a straightforward extension for Laplace or Poisson noise and develop empirical estimates for the optimal value of the regularization parameter λ.
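
    A minimal 1-D sketch of the split Bregman iteration for TV-regularized deconvolution under periodic boundary conditions (the article treats 2-D images; parameter defaults are illustrative): the u-subproblem is solved exactly in the Fourier domain, and the auxiliary variable is updated by soft shrinkage.

```python
import numpy as np

def shrink(x, t):
    # soft-thresholding, the proximal map of the L1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_deconv_1d(f, k, lam=50.0, mu=10.0, n_iter=100):
    # minimize (lam/2)||k * u - f||^2 + ||D u||_1 via split Bregman
    n = len(f)
    K = np.fft.fft(k, n)
    D = np.fft.fft([1.0, -1.0], n)           # forward difference
    denom = lam * np.abs(K)**2 + mu * np.abs(D)**2
    d = np.zeros(n); b = np.zeros(n)
    u = f.copy()
    for _ in range(n_iter):
        rhs = lam * np.conj(K) * np.fft.fft(f) \
            + mu * np.conj(D) * np.fft.fft(d - b)
        u = np.fft.ifft(rhs / denom).real     # exact Fourier solve
        Du = np.fft.ifft(D * np.fft.fft(u)).real
        d = shrink(Du + b, 1.0 / mu)          # L1 subproblem
        b += Du - d                           # Bregman update
    return u
```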

  14. Distributed fusion white noise deconvolution estimators

    Institute of Scientific and Technical Information of China (English)

    Xiaojun SUN; Zili DENG

    2009-01-01

    The white noise deconvolution or input white noise estimation problem has important applications in oil seismic exploration, communication, and signal processing. By combining the Kalman filtering method with the modern time series analysis method, based on the autoregressive moving average (ARMA) innovation model, new distributed fusion white noise deconvolution estimators are presented by weighting local input white noise estimators for general multisensor systems with different local dynamic models and correlated noises. The new estimators can handle input white noise fused filtering, prediction, and smoothing problems, and are applicable to systems with colored measurement noise. Their accuracy is higher than that of local white noise deconvolution estimators. To compute the optimal weights, a new formula for local estimation error cross-covariances is given. A Monte Carlo simulation for a system with Bernoulli-Gaussian input white noise shows their effectiveness and performance.

  15. Multifunction nonlinear signal processor - Deconvolution and correlation

    Science.gov (United States)

    Javidi, Bahram; Horner, Joseph L.

    1989-08-01

    A multifunctional nonlinear optical signal processor is described that allows different types of operations, such as image deconvolution and nonlinear correlation. In this technique, the joint power spectrum of the input signal is thresholded with varying nonlinearity to produce different specific operations. In image deconvolution, the joint power spectrum is modified and hard-clip thresholded to remove the amplitude distortion effects and to restore the correct phase of the original image. In optical correlation, the Fourier transform interference intensity is thresholded to provide higher correlation peak intensity and a better-defined correlation spot. Various types of correlation signals can be produced simply by varying the severity of the nonlinearity, without the need to synthesize a specific matched filter. An analysis of the nonlinear processor for image deconvolution is presented.

  16. Hopfield Neural Network deconvolution for weak lensing measurement

    CERN Document Server

    Nurbaeva, Guldariya; Courbin, Frederic; Meylan, Georges

    2014-01-01

    Weak gravitational lensing has the potential to place tight constraints on the equation of state of dark energy. However, this will only be possible if shear measurement methods can reach the required level of accuracy. We present a new method to measure the ellipticity of galaxies used in weak lensing surveys. The method makes use of direct deconvolution of the data by the total point spread function (PSF). We adopt a linear algebra formalism that represents the PSF as a Toeplitz matrix. This allows us to solve the convolution equation by applying the Hopfield neural network iterative scheme. The ellipticity of galaxies in the deconvolved images is then measured using second-order moments of the autocorrelation function of the images. To our knowledge, it is the first time full image deconvolution has been used to measure weak lensing shear. We apply our method to the simulated weak lensing data proposed in the GREAT10 challenge and obtain a quality factor of Q=87. This result is obtained after applying image…

  17. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated … optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion …

  18. Natural Gradient Approach to Multichannel Blind Deconvolution

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper we study the geometrical structures of FIR filters and their application to multichannel blind deconvolution. First we introduce a Lie group structure and a Riemannian structure on the manifolds of the FIR filters. Then we derive the natural gradients on the manifolds using the isometry of the Riemannian metric. Using the natural gradient, we present a novel learning algorithm for blind deconvolution based on the minimization of mutual information. Some properties of the learning algorithm, such as equivariance and stability, are also studied. Finally, the simulations are given to illustrate the effectiveness and validity of the proposed algorithm.

  19. Passive seismic interferometry by multidimensional deconvolution

    NARCIS (Netherlands)

    Wapenaar, C.P.A.; Van der Neut, J.R.; Ruigrok, E.N.

    2008-01-01

    We introduce seismic interferometry of passive data by multidimensional deconvolution (MDD) as an alternative to the crosscorrelation method. Interferometry by MDD has the potential to correct for the effects of source irregularity, assuming the first arrival can be separated from the full response.

  20. Windprofiler optimization using digital deconvolution procedures

    Science.gov (United States)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to data acquisition procedures used for windprofiler radars have the potential for improving the height coverage at optimum resolution, and permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any type of hardware except for a transmitter (and associated drivers), a receiver and a digitizer. No Digital Signal Processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have then been able to not only optimize height resolution, but also have been able to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.

  1. Nonstationary sparsity-constrained seismic deconvolution

    Science.gov (United States)

    Sun, Xue-Kai; Sam, Zandong Sun; Xie, Hui-Wen

    2014-12-01

    The Robinson convolution model is mainly restricted by three inappropriate assumptions, i.e., statistically white reflectivity, minimum-phase wavelet, and stationarity. Modern reflectivity inversion methods (e.g., sparsity-constrained deconvolution) generally attempt to suppress the problems associated with the first two assumptions but often ignore that seismic traces are nonstationary signals, which undermines the basic assumption of an unchanging wavelet in reflectivity inversion. Through tests on reflectivity series, we confirm the effects of nonstationarity on reflectivity estimation and the loss of significant information, especially in deep layers. To overcome the problems caused by nonstationarity, we propose a nonstationary convolutional model, and then use the attenuation curve in log spectra to detect and correct the influences of nonstationarity. We use Gabor deconvolution to handle nonstationarity and sparsity-constrained deconvolution to separate reflectivity and wavelet. The combination of the two deconvolution methods effectively handles nonstationarity and greatly reduces the problems associated with the unreasonable assumptions regarding reflectivity and wavelet. Using marine seismic data, we show that correcting for nonstationarity helps recover subtle reflectivity information and enhances the characterization of details in the geological record.

  2. Deconvolution of Lorentzian broadened spectra. Pt. 2. Stepped deconvolution and smoothing filtration

    Energy Technology Data Exchange (ETDEWEB)

    Kantchev, K.; Nikolov, S.

    1987-04-15

    A new method of numerical time-domain deconvolution of Lorentzian-broadened spectra is proposed. The new algorithm consists of convolution with a deconvoluting function accomplished with a preset step, so that an undersampling of the input spectra is performed. The main advantage of this method is the considerable reduction of the noise enhancement. A theoretical analysis of the possibilities, restrictions, and errors is given. The results are confirmed by test investigations and by experimental examples.

  3. Parametric study on sequential deconvolution for force identification

    Science.gov (United States)

    Lai, Tao; Yi, Ting-Hua; Li, Hong-Nan

    2016-09-01

    Force identification can be mathematically viewed as the mapping from the observed responses to the external forces through a matrix filled with system Markov parameters, which makes it difficult or even impossible for long time durations. A potentially efficient solution is to perform the identification sequentially. This paper presents a parametric study of the sequential deconvolution input reconstruction (SDR) method proposed by Bernal. The behavior of the SDR method under the effects of window parameters, noise levels, and sensor configurations is investigated. In addition, a new normalized standard deviation of the reconstruction error over time is derived to cover the effect of only independent noise entries. Sinusoidal and band-limited white noise excitations are identified with good accuracy even with 10% noise. The simulation results yield various conclusions that may be helpful to engineering practitioners.

  4. Algorithmic Optimisations for Iterative Deconvolution Methods

    OpenAIRE

    Welk, Martin; Erler, Martin

    2013-01-01

    We investigate possibilities to speed up iterative algorithms for non-blind image deconvolution. We focus on algorithms in which convolution with the point-spread function to be deconvolved is used in each iteration, and aim at accelerating these convolution operations as they are typically the most expensive part of the computation. We follow two approaches: First, for some practically important specific point-spread functions, algorithmically efficient sliding window or list processing tech...

  5. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp

    2017-09-04

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular, this univariate case suffers from several non-trivial ambiguities, and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to a global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, theoretically, recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.
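
    A hedged toy of the gradient-descent side of this discussion: plain gradient descent on the non-convex least-squares objective for a real-valued circular convolution. This illustrates only the flavor of Wirtinger flow; the paper's complex-valued setting, initialization, and autocorrelation constraints are not reproduced, and all names are illustrative.

```python
import numpy as np

def blind_deconv_gd(y, step=0.01, n_iter=2000, seed=0):
    # Minimize 0.5 * || u (*) v - y ||^2 over both factors of a
    # circular convolution (real toy; small fixed step size).
    n = len(y)
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n) * 0.1
    v = rng.standard_normal(n) * 0.1
    F, iF = np.fft.fft, np.fft.ifft
    for _ in range(n_iter):
        r = iF(F(u) * F(v)).real - y                 # residual
        gu = iF(F(r) * np.conj(F(v))).real           # gradient w.r.t. u
        gv = iF(F(r) * np.conj(F(u))).real           # gradient w.r.t. v
        u -= step * gu
        v -= step * gv
    return u, v
```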

  6. Fast and Stable Signal Deconvolution via Compressible State-Space Models.

    Science.gov (United States)

    Kazemipour, Abbas; Liu, Ji; Solarana, Krystyna; Nagode, Daniel; Kanold, Patrick; Wu, Min; Babadi, Behtash

    2017-04-13

    Common biological measurements are in the form of noisy convolutions of signals of interest with possibly unknown and transient blurring kernels. Examples include EEG and calcium imaging data. Thus, signal deconvolution of these measurements is crucial in understanding the underlying biological processes. The objective of this paper is to develop fast and stable solutions for signal deconvolution from noisy, blurred and undersampled data, where the signals are in the form of discrete events distributed in time and space. We introduce compressible state-space models as a framework to model and estimate such discrete events. These state-space models admit abrupt changes in the states and have a convergent transition matrix, and are coupled with compressive linear measurements. We consider a dynamic compressive sensing optimization problem and develop a fast solution, using two nested Expectation Maximization algorithms, to jointly estimate the states as well as their transition matrices. Under suitable sparsity assumptions on the dynamics, we prove optimal stability guarantees for the recovery of the states and present a method for the identification of the underlying discrete events with precise confidence bounds. We present simulation studies as well as application to calcium deconvolution and sleep spindle detection, which verify our theoretical results and show significant improvement over existing techniques. Our results show that by explicitly modeling the dynamics of the underlying signals, it is possible to construct signal deconvolution solutions that are scalable, statistically robust, and achieve high temporal resolution. Our proposed methodology provides a framework for modeling and deconvolution of noisy, blurred, and undersampled measurements in a fast and stable fashion, with potential application to a wide range of biological data.

  7. Iterative Deconvolution of PEA Measurements for Enhancing the Spatial Resolution of Charge Profile in Space Polymers

    Directory of Open Access Journals (Sweden)

    Mohamad Arnaout

    2016-01-01

    This work aims to improve the PEA calibration technique by defining a well-conditioned transfer matrix. To this end, a numerical electroacoustic model that allows determining the output voltage of the piezoelectric sensor and the acoustic pressure is developed with the software COMSOL®. The proposed method recovers the charge distribution within the sample using an iterative deconvolution method that uses the transfer matrix obtained with the new calibration technique. The results obtained on theoretical and experimental signals show an improvement in spatial resolution compared with the standard method usually used.

  8. ITERATIVE MULTICHANNEL BLIND DECONVOLUTION METHOD FOR TEMPORALLY COLORED SOURCES

    Institute of Scientific and Technical Information of China (English)

    Zhang Mingjian; Wei Gang

    2004-01-01

    An iterative separation approach, in which source signals are extracted and removed one by one, is proposed for multichannel blind deconvolution of colored signals. Each source signal is extracted in two stages: a filtered version of the source signal is first obtained by solving a generalized eigenvalue problem, which is then followed by a single-channel blind deconvolution based on ensemble learning. Simulation demonstrates the capability of the approach to perform efficient multichannel blind deconvolution.
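
    A sketch of the first stage under common assumptions (zero-mean mixtures in a channels-by-samples array): the top generalized eigenvector of a lagged covariance against the zero-lag covariance favors the most temporally correlated (colored) component, yielding a filtered version of one source. The second-stage ensemble-learning deconvolution is not shown, and names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def extract_filtered_source(X, lag=1):
    # X: (channels x samples) zero-mean mixture.
    C0 = X @ X.T / X.shape[1]                    # zero-lag covariance
    C1 = X[:, lag:] @ X[:, :-lag].T / (X.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)                       # symmetrize lagged cov
    _, vecs = eigh(C1, C0)                       # generalized eig problem
    w = vecs[:, -1]                              # top generalized eigvec
    return w @ X                                 # filtered source estimate
```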

  9. Refinement of Fourier Coefficients from the Stokes Deconvoluted Profile

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A computer-aided experimental technique was used to study the Stokes deconvolution of X-ray diffraction profiles. Considerable differences can be found between the Fourier coefficients obtained from the deconvolutions of singlet and doublet experimental profiles. Nevertheless, the resultant physical profiles corresponding to singlet and doublet profiles are identical. An approach is proposed to refine the Fourier coefficients, and the refined Fourier coefficients coincide well with those obtained from the deconvolution of the singlet experimental profile.

  10. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By this means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase; classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up has also been analysed, and simulated and real data have been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.

  11. Free deconvolution for signal processing applications

    CERN Document Server

    Ryan, O

    2007-01-01

    Situations in many fields of research, such as digital communications, nuclear physics and mathematical finance, can be modelled with random matrices. When the matrices get large, free probability theory is an invaluable tool for describing the asymptotic behaviour of many systems. It will be shown how free probability can be used to aid in source detection for certain systems. Sample covariance matrices for systems with noise are the starting point in our source detection problem. Multiplicative free deconvolution is shown to be a method which can aid in expressing limit eigenvalue distributions for sample covariance matrices, and to simplify estimators for eigenvalue distributions of covariance matrices.

  12. Blind iterative deconvolution of binary star images

    CERN Document Server

    Saha, S K

    1997-01-01

    The technique of blind iterative deconvolution (BID) was used to remove the atmospherically induced point spread function (PSF) from short-exposure images of two binary stars, HR 5138 and HR 5747, obtained at the Cassegrain focus of the 2.34 meter Vainu Bappu Telescope (VBT), situated at the Vainu Bappu Observatory (VBO), Kavalur. The position angles and separations of the binary components were found to be consistent with results of the auto-correlation technique, while the Fourier phases of the reconstructed images were consistent with published observations of the binary orbits.

  13. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability has been explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the …

  14. Role of positivity in blind deconvolution (Conference Presentation)

    Science.gov (United States)

    Pal, Piya; Qiao, Heng

    2017-05-01

    Blind deconvolution is an important problem arising in many engineering and scientific applications, ranging from imaging, communication to computer vision and machine learning. Classical techniques to solve this highly ill posed problem exploit statistical priors on the signals of interest. In recent times, there has been a renewed interest in deterministic approaches for blind deconvolution, whereby, using the novel idea of "lifting", the non-convex blind deconvolution problem can be cast as a semidefinite program. Using suitable subspace assumptions on the unknown signals, precise theoretical guarantees can be derived on the number of measurements needed to perform blind deconvolution. In this paper, we will address the problem of positive sparse blind deconvolution, where the signals of interest exhibit positivity (alongside sparsity) either naturally, or in appropriate transform domains. Important applications of positive blind deconvolution include image deconvolution and positive spike detection. We will show that positivity is a powerful constraint that can be exploited to cast the blind deconvolution problem in terms of a simple linear program that can be theoretically analyzed. We will explore the questions of uniqueness and identifiability, and develop conditions under which the linear program reveals the true positive sparse solution. Numerical results will demonstrate the superior performance of the proposed approach.

  15. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  16. Deconvolution methods for structured illumination microscopy.

    Science.gov (United States)

    Chakrova, Nadya; Rieger, Bernd; Stallinga, Sjoerd

    2016-07-01

    We compare two recently developed multiple-frame deconvolution approaches for the reconstruction of structured illumination microscopy (SIM) data: the pattern-illuminated Fourier ptychography algorithm (piFP) and the joint Richardson-Lucy deconvolution (jRL). The quality of the images reconstructed by these methods is compared in terms of the achieved resolution improvement, noise enhancement, and inherent artifacts. Furthermore, we study the issue of object-dependent resolution improvement by considering the modulation transfer functions derived from different types of objects. The performance of the considered methods is tested in experiments and benchmarked with a commercial SIM microscope. We find that the piFP method resolves periodic and isolated structures equally well, whereas the jRL method provides significantly higher resolution for isolated objects compared to periodic ones. Images reconstructed by the piFP and jRL algorithms are comparable to the images reconstructed using the generalized Wiener filter applied in most commercial SIM microscopes. An advantage of the discussed algorithms is that they allow the reconstruction of SIM images acquired under different types of illumination, such as multi-spot or random illumination.
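
    For reference, the jRL iteration mentioned above multiplies the current object estimate by a ratio of back-projected data over back-projected model, summed over all illumination patterns. A minimal sketch under simplifying assumptions (known patterns and PSF, no regularization; eps merely guards divisions):

      import numpy as np
      from scipy.signal import fftconvolve

      def joint_rl(frames, patterns, psf, n_iter=30, eps=1e-8):
          """Joint Richardson-Lucy sketch for pattern-illuminated data.
          frames[m] ~ PSF * (patterns[m] * object); frames and patterns
          share one shape, psf may be smaller."""
          obj = np.full_like(frames[0], frames[0].mean())
          psf_flip = psf[::-1, ::-1]        # adjoint of the blur operator
          for _ in range(n_iter):
              num = np.zeros_like(obj)
              den = np.zeros_like(obj)
              for g, p in zip(frames, patterns):
                  model = fftconvolve(p * obj, psf, mode="same") + eps
                  num += p * fftconvolve(g / model, psf_flip, mode="same")
                  den += p * fftconvolve(np.ones_like(g), psf_flip, mode="same")
              obj *= num / (den + eps)      # multiplicative RL update
          return obj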

  17. Improved Gabor Deconvolution and Its Extended Applications

    Directory of Open Access Journals (Sweden)

    Sun Xuekai

    2016-02-01

    In log time-frequency spectra, the nonstationary convolution model is a linear equation, and thus we improve Gabor deconvolution by employing a log hyperbolic smoothing scheme which can be implemented as an iterative process. Numerical tests and practical applications demonstrate that the improved Gabor deconvolution can further broaden frequency bandwidth with less computational expense than the ordinary method. Moreover, we attempt to enlarge this method's application value by addressing nonstationarity and evaluating Q values. In fact, the energy relationship of each hyperbolic bin (i.e., the attenuation curve) can be taken as a quantitative indicator in balancing nonstationarity and conditioning seismic traces to the assumption of an unchanging wavelet, which in turn reveals more useful information for constrained reflectivity inversion. Meanwhile, a statistical method for Q-value estimation is also proposed, utilizing the gradient of this linear model. In practice, not only do the estimates agree well with geologic settings, but applications to Q-compensation migration are also favorable in characterizing deep geologic structures, such as pinch-out boundaries and water channels.

  18. Compressive Deconvolution in Medical Ultrasound Imaging.

    Science.gov (United States)

    Chen, Zhouye; Basarab, Adrian; Kouamé, Denis

    2016-03-01

    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. By exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, our approach jointly reduces the data volume and improves image quality. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.

  19. Reliability of multiresolution deconvolution for improving depth resolution in SIMS analysis

    Science.gov (United States)

    Boulakroune, M.'Hamed

    2016-11-01

    This paper addresses the effectiveness and reliability of a multiresolution deconvolution algorithm for recovering Secondary Ion Mass Spectrometry (SIMS) profiles altered by the measurement process. The new algorithm is characterized as a regularized wavelet transform. It combines ideas from Tikhonov-Miller regularization, wavelet analysis and deconvolution algorithms in order to benefit from the advantages of each. The SIMS profiles were obtained by analysis of two structures of boron in a silicon matrix using a Cameca IMS-6f instrument at oblique incidence. The first structure is large, consisting of two distant wide boxes; the second is a thin structure containing ten delta-layers, to which zone-by-zone deconvolution was applied. It is shown that this new multiresolution algorithm gives the best results. In particular, local application of the regularization parameter to the blurred and estimated solutions at each resolution level yielded smoothed signals without creating artifacts related to the noise content of the profile. This led to a significant improvement in depth resolution and peak maxima.

  20. Study of weighted space deconvolution algorithm in computer controlled optical surfacing formation

    Institute of Scientific and Technical Information of China (English)

    Hongyu Li; Wei Zhang; Guoyu Yu

    2009-01-01

    Theoretical and experimental research on the dwell-time deconvolution algorithm in computer controlled optical surfacing (CCOS) is presented, aimed at obtaining an ultra-smooth surface for space optical elements. Based on the Preston equation, the convolution model of CCOS is derived. Considering the ill-posedness of the deconvolution problem and the practical conditions of CCOS, a weighted spatial deconvolution algorithm based on a non-periodic matrix model is presented, which avoids the ill-conditioning caused by noise from measurement error. The discrete convolution equation is solved using the conjugate gradient iterative method, which effectively reduces the workload of iterative calculation in the spatial domain. To handle the edge effect of the convolution model, the method adopts a marginal factor to control edge precision, with good results. A simulated processing test shows that the convergence ratio of the processed surface shape error reaches 80%. The algorithm was further verified in an experiment on a numerically controlled bonnet polishing machine, where an ultra-smooth glass surface with a root-mean-square (RMS) error of 0.0088 μm was achieved. The simulation and experimental results indicate that the algorithm is stable, convergent, and precise, and satisfies the requirements for solving the actual dwell time.
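
    The core numerical step described above is solving the discrete convolution equation for the dwell time with conjugate gradients in the spatial domain. A minimal sketch of that step follows; the function name dwell_time_cg, the FFT-based operator, and the final non-negativity clamp are illustrative assumptions, not the paper's exact weighting scheme:

      import numpy as np
      from scipy.signal import fftconvolve
      from scipy.sparse.linalg import LinearOperator, cg

      def dwell_time_cg(target_removal, tif, n_iter=50):
          """Solve (removal = TIF * dwell) in the least-squares sense with
          conjugate gradients on the normal equations; tif is the tool
          influence function."""
          shape = target_removal.shape
          tif_flip = tif[::-1, ::-1]
          def normal_op(v):                  # applies H^T H via two convolutions
              Hv = fftconvolve(v.reshape(shape), tif, mode="same")
              return fftconvolve(Hv, tif_flip, mode="same").ravel()
          A = LinearOperator((target_removal.size,) * 2, matvec=normal_op)
          b = fftconvolve(target_removal, tif_flip, mode="same").ravel()
          t, _ = cg(A, b, maxiter=n_iter)
          return np.maximum(t.reshape(shape), 0)  # dwell time is non-negative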

  1. Iterative smoothing and deconvolution of one- and two-dimensional elemental distribution data

    Science.gov (United States)

    Coote, G. E.

    1997-07-01

    The resolution of the data from many instruments can be improved, or the rate of data collection can be increased for the same final resolution, by applying to the data reliable algorithms for smoothing and deconvolution. Iterative methods which were formerly impractical can easily be applied on a small computer. An ingenious linear algorithm for deconvolution of one-dimensional data (van Cittert, 1931) gave much better results when Jansson (1963) introduced a relaxation function which ensured the results remained positive. Gold (1964) derived by a matrix approach a nonlinear algorithm which used a different method of comparison, but Xu et al. showed 30 years later that it is a special van Cittert algorithm with a variable relaxation function. Tests of Gold's method show that it is reliable and much faster than Jansson's algorithm, converging in 20 iterations or fewer. If a microprobe beam spot is to a good approximation square or rectangular a 2-D image can be smoothed or deconvolved in the X and Y directions independently, and the Gold algorithm has proved suitable for the deconvolution stage. Almost all smoothing methods will broaden narrow peaks, but an exception is the linear iterative method of Morrison (1962), which reduces any structure narrower than the resolution function. The negative feedback step used in the deconvolution algorithms is not possible in a smoothing algorithm. The method suffers from a halting problem, since it smoothes during early iterations but eventually reproduces the original data. This can be prevented by introducing a relaxation function which is unity for the first iteration but decreases rapidly with succeeding iterations.
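
    For concreteness, minimal sketches of the two iterations discussed above, assuming a normalized, non-negative kernel h and positive data y (parameter values are arbitrary illustrations):

      import numpy as np
      from scipy.signal import fftconvolve

      def gold_deconv(y, h, n_iter=20, eps=1e-12):
          """Gold's ratio iteration (a van Cittert scheme with a variable
          relaxation factor); stays non-negative for positive data."""
          x = y.copy()
          for _ in range(n_iter):
              x = x * y / (fftconvolve(x, h, mode="same") + eps)
          return x

      def jansson_van_cittert(y, h, n_iter=200, r0=0.2, a=1.0):
          """Van Cittert iteration with Jansson's relaxation: the update is
          damped to zero at the physical bounds x = 0 and x = a (the assumed
          upper signal bound), keeping the estimate positive."""
          x = y.copy()
          for _ in range(n_iter):
              relax = r0 * (1.0 - 2.0 * np.abs(x - a / 2.0) / a)
              x = x + relax * (y - fftconvolve(x, h, mode="same"))
          return x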

  2. Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C

    2013-05-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared to existing methods, potentially improving the differentiation between normal and ischemic tissue in the brain.
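
    The conventional baseline that such methods improve upon is deconvolution of each tissue attenuation curve by the arterial input function (AIF) using a truncated SVD of a block-circulant matrix (bSVD). Below is a minimal sketch of that classical baseline, not of the sparse method itself; the truncation threshold lam is a typical but arbitrary choice:

      import numpy as np

      def bsvd_residue(aif, tac, dt, lam=0.1):
          """Block-circulant SVD (bSVD) perfusion deconvolution: recover the
          flow-scaled residue function R(t) from a tissue curve tac and an
          arterial input aif; CBF is the maximum of the result."""
          n = len(aif)
          m = 2 * n                     # zero-pad to reduce circular wrap-around
          a = np.zeros(m); a[:n] = aif
          # Circulant matrix: column j is the AIF circularly shifted by j
          C = np.column_stack([np.roll(a, j) for j in range(m)]) * dt
          U, s, Vt = np.linalg.svd(C)
          s_inv = np.where(s > lam * s.max(), 1.0 / s, 0.0)  # truncated SVD
          c = np.zeros(m); c[:n] = tac
          r = Vt.T @ (s_inv * (U.T @ c))
          return r[:n]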

  3. A Deep Generative Deconvolutional Image Model

    Energy Technology Data Exchange (ETDEWEB)

    Pu, Yunchen; Yuan, Xin; Stevens, Andrew J.; Li, Chunyuan; Carin, Lawrence

    2016-05-09

    A deep generative model is developed for representation and analysis of images, based on a hierarchical convolutional dictionary-learning framework. Stochastic unpooling is employed to link consecutive layers in the model, yielding top-down image generation. A Bayesian support vector machine is linked to the top-layer features, yielding max-margin discrimination. Deep deconvolutional inference is employed when testing, to infer the latent features, and the top-layer features are connected with the max-margin classifier for discrimination tasks. The model is efficiently trained using a Monte Carlo expectation-maximization (MCEM) algorithm; the algorithm is implemented on graphics processing units (GPUs) to enable large-scale learning and fast testing. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.

  4. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Wang

    2011-03-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors in variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
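
    The idea behind a deconvoluting kernel is to divide the kernel's characteristic function by that of the error distribution. For Laplace-distributed errors and a Gaussian kernel this inverse Fourier transform has the closed form K_U(t) = φ(t)[1 + (σ/h)²(1 − t²)], which the following Python sketch uses (a rough analogue of what decon does in R; the bandwidth h = 0.35 is an arbitrary illustration, not a data-driven choice):

      import numpy as np

      def dkde_laplace(w, sigma, h, grid):
          """Deconvoluting kernel density estimate for w = x + u with
          Laplace(0, sigma) errors and a Gaussian kernel. The estimate
          can dip slightly negative, a known property of DKDEs."""
          t = (grid[:, None] - w[None, :]) / h
          phi = np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)
          K_U = phi * (1 + (sigma / h)**2 * (1 - t**2))
          return K_U.mean(axis=1) / h

      # Hypothetical usage: true X ~ N(0, 1) observed with Laplace noise
      rng = np.random.default_rng(1)
      x = rng.standard_normal(2000)
      u = rng.laplace(0, 0.4, 2000)
      grid = np.linspace(-4, 4, 200)
      f_hat = dkde_laplace(x + u, sigma=0.4, h=0.35, grid=grid)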

  5. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into stability, accuracy, and time complexity of a numerical bijection between the time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving… Discrimination achieves mixed phase deconvolution and is equivalent to complex cepstrum based deconvolution by causality, which has lower time and space complexities, as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing…

  6. Blind Deconvolution for Ultrasound Sequences Using a Noninverse Greedy Algorithm

    Directory of Open Access Journals (Sweden)

    Liviu-Teodor Chira

    2013-01-01

    The blind deconvolution of ultrasound sequences in medical ultrasound imaging remains a major problem despite the efforts made. This paper presents a blind noninverse deconvolution algorithm to eliminate the blurring effect, using the envelope of the acquired radio-frequency sequences and an a priori Laplacian distribution for the deconvolved signal. The algorithm is executed in two steps. First, the point spread function is automatically estimated from the measured data. Second, the data are reconstructed in a nonblind way using the proposed algorithm. The algorithm is a nonlinear blind deconvolution which works as a greedy algorithm. The results on simulated signals and real images are compared with different state-of-the-art deconvolution methods. Our method shows good results for scatterer detection, speckle noise suppression, and execution time.

  7. Deconvolution of ultrafast kinetic data with inverse filtering

    Energy Technology Data Exchange (ETDEWEB)

    Banyasz, Akos [Department of Physical Chemistry, Eoetvoes University, P.O. Box 32, H-1518 Budapest 112 (Hungary); Research Institute for Solid State Physics and Optics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Matyus, Edit [Department of Physical Chemistry, Eoetvoes University, P.O. Box 32, H-1518 Budapest 112 (Hungary); Keszei, Erno [Department of Physical Chemistry, Eoetvoes University, P.O. Box 32, H-1518 Budapest 112 (Hungary)]. E-mail: keszei@chem.elte.hu

    2005-02-01

    Due to the limited pulse widths in ultrafast laser or electron pulse kinetic measurements, when the studied reactions have subpicosecond characteristic times, convolution with the pulses always distorts the kinetic signal. Here, we describe inverse filtering based on Fourier transformation to deconvolve measured ultrafast kinetic data without invoking a particular kinetic mechanism. Deconvolution methods using additional Wiener filtering or two-parameter regularization are found to give reliable results for simulated as well as experimental data.
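
    Inverse filtering divides the Fourier transform of the measured kinetics by that of the instrument response; a Wiener-type regularization term keeps the division from amplifying noise at frequencies where the response is weak. A minimal one-dimensional sketch (the constant noise-to-signal ratio nsr is a simplifying assumption; the paper's filters are built from the data):

      import numpy as np

      def wiener_deconvolve(signal, irf, nsr=1e-2):
          """Fourier-domain inverse filtering with Wiener regularization:
          divide by the instrument response spectrum while damping
          frequencies where the response is weak relative to nsr."""
          n = len(signal)
          S = np.fft.rfft(signal, n)
          H = np.fft.rfft(irf, n)
          W = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener inverse filter
          return np.fft.irfft(S * W, n)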

  8. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop, with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species, and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.

  9. Imaging P-to-S conversions with broad-band seismic arrays using multichannel time-domain deconvolution

    Science.gov (United States)

    Neal, Scott L.; Pavlis, Gary L.

    2001-09-01

    This paper describes a series of innovations in the problem of deconvolving forward scattered P-to-S conversions. We introduce a theoretical foundation for a recently developed multichannel stacking technique and show that this process is equivalent to a spatial convolution of the incident wavefield with the discretely sampled set of station locations. We then show that deconvolution of the stacked data is a form of multichannel deconvolution with a spatially variable set of weights equal to those used in stacking. This result is independent of the particular deconvolution method that is used. A second innovation focuses on the design of deconvolution operators that correctly account for the loss of high frequency components of P-to-S conversions caused by differential attenuation of P and S waves. We describe two complementary methods to implement this: (1) through the use of a regularization operator that penalizes high frequencies and increases with P-to-S lag time, or (2) through the use of a quelling operator. For the latter, we introduce the use of a t* operator that is applied to the deconvolution matrix operator. The t* operator progressively filters the vertical component seismogram with increasing P-to-S lag time and is based on an earth model of body wave attenuation. Both techniques produce progressively smoother solutions for increasing P-to-S lag times. The quelling approach has two advantages: (1) it is based on the physical principle that this solution is designed to address, and (2) it provides a unified inversion framework for the combination of stacking and deconvolution. This combination may be interpreted as a three-dimensional quelling (smoothing) operator that is applied to the full wavefield to stabilize the inversion. Application of this procedure to synthetic data shows that while the addition of a time dependent component to the deconvolution tends to decrease the frequency content of the solution, the amplitude of background ringing is

  10. A Framework for Fast Image Deconvolution With Incomplete Observations.

    Science.gov (United States)

    Simoes, Miguel; Almeida, Luis B; Bioucas-Dias, Jose; Chanussot, Jocelyn

    2016-11-01

    In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a "partial" ADMM, in which not all variables are dualized. We report experimental comparisons with other primal-dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries.
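
    The central idea above can be sketched compactly: keep the fast FFT-based (periodic-boundary) deconvolution as the inner step, and alternate it with re-estimation of the unobserved pixels so the periodic model stays consistent with the data where it was actually observed. The toy sketch below assumes a Wiener filter as the inner solver with a fixed noise-to-signal ratio, a simplification of the ADMM machinery in the paper:

      import numpy as np

      def deconv_unknown_pixels(y_obs, mask, psf, n_iter=30, nsr=1e-2):
          """Alternate FFT Wiener deconvolution with re-estimation of the
          unobserved pixels (mask == False); mask marks observed pixels."""
          shape = y_obs.shape
          hpad = np.zeros(shape)
          hpad[:psf.shape[0], :psf.shape[1]] = psf
          hpad = np.roll(hpad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                         (0, 1))
          H = np.fft.fft2(hpad)               # centered periodic PSF spectrum
          y = y_obs.copy()
          for _ in range(n_iter):
              X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + nsr)
              x = np.real(np.fft.ifft2(X))    # deconvolved image estimate
              y_model = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
              y = np.where(mask, y_obs, y_model)   # fill unknown pixels only
          return x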

  11. Full cycle rapid scan EPR deconvolution algorithm

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution, because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  12. Simultaneous deghosting and wavelet estimation via blind deconvolution

    Science.gov (United States)

    Haghshenas Lari, Hojjat; Gholami, Ali

    2016-12-01

    Seismic deconvolution and deghosting are common methods for increasing the temporal resolution of marine seismic data. In this paper, we employ the advantages of a multichannel blind deconvolution technique to obtain a deghosting algorithm for source- and receiver-side ghost elimination. The advantage of the proposed algorithm is twofold: first, it uses the correlation between the information contained in neighboring traces to stabilize the deghosting process while deconvolving the data in a blind fashion; second, an estimate of the source wavelet is simultaneously provided by the inversion process. A fast algorithm is provided to solve the inverse problem using the split Bregman iteration. Numerical results from simulated and field seismic data confirm the effectiveness of the proposed algorithm for automatic deghosting and deconvolution of marine data, while being able to recover complex mixed-phase source wavelets.

  13. The application of compressive sampling to radio astronomy I: Deconvolution

    CERN Document Server

    Li, Feng; de Hoog, Frank

    2011-01-01

    Compressive sampling is a new paradigm for sampling, based on sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis function for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. This new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
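
    A standard way to pose CS-style deconvolution is as a sparse recovery problem solved by iterative shrinkage-thresholding (ISTA). The sketch below works in the pixel basis for brevity; the paper instead promotes sparsity in an isotropic undecimated wavelet dictionary, and the step size and threshold here are arbitrary illustrations:

      import numpy as np
      from scipy.signal import fftconvolve

      def ista_deconv(dirty, psf, lam=0.05, n_iter=200):
          """ISTA for sparse deconvolution of a dirty image by a known PSF:
          gradient step on the data fit, then soft thresholding."""
          psf_flip = psf[::-1, ::-1]
          L = np.sum(np.abs(psf))**2     # upper bound on the Lipschitz constant
          x = np.zeros_like(dirty)
          for _ in range(n_iter):
              resid = dirty - fftconvolve(x, psf, mode="same")
              x = x + fftconvolve(resid, psf_flip, mode="same") / L
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # shrinkage
          return x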

  14. Wavelet-based deconvolution of ultrasonic signals in nondestructive evaluation

    Institute of Scientific and Technical Information of China (English)

    HERRERA Roberto Henry; OROZCO Rubén; RODRIGUEZ Manuel

    2006-01-01

    In this paper, the inverse problem of reconstructing the reflectivity function of a medium is examined within a blind deconvolution framework. The ultrasound pulse is estimated using higher-order statistics, and a Wiener filter is used to obtain the ultrasonic reflectivity function through wavelet-based models. A new approach to the parameter estimation of the inverse filtering step is proposed in the nondestructive evaluation field, based on the theory of Fourier-Wavelet regularized deconvolution (ForWaRD). This new approach can be viewed as a solution to the open problem of adapting the ForWaRD framework to perform convolution kernel estimation and deconvolution interdependently. The results indicate stable estimates of the pulse and an improvement in the radio-frequency (RF) signal in terms of its signal-to-noise ratio (SNR) and axial resolution. Simulations and experiments showed that the proposed approach can provide robust and optimal estimates of the reflectivity function.

  15. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)]

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  16. Overview of marine controlled-source electromagnetic interferometry by multidimensional deconvolution

    NARCIS (Netherlands)

    Hunziker, J.W.; Slob, E.C.; Wapenaar, C.P.A.

    2014-01-01

    Interferometry by multidimensional deconvolution for marine Controlled-Source Electromagnetics can suppress the direct field and the airwave in order to increase the detectability of the reservoir. For monitoring, interferometry by multidimensional deconvolution can increase the source repeatability

  17. FTIR Analysis of Alkali Activated Slag and Fly Ash Using Deconvolution Techniques

    Science.gov (United States)

    Madavarapu, Sateesh Babu

    Studies of aluminosilicate materials to replace traditional construction materials such as ordinary Portland cement (OPC), and thereby reduce the associated environmental effects, have been an important research area for the past decades. Many properties, such as strength, have already been studied, and the primary focus is now on the reaction mechanism and the effect of process parameters on the formed products. The aim of this research was to explore the structural changes and reaction products of geopolymers (slag and fly ash) using Fourier transform infrared spectroscopy (FTIR) and deconvolution techniques. Spectroscopic techniques give valuable information at the molecular level, but not all methods are economical and simple. To understand the mechanisms of alkali-activated aluminosilicate materials, attenuated total reflectance (ATR) FTIR has been used to analyze the effect of process parameters on the reaction products. For complex systems like geopolymers, deconvolution techniques help to obtain the properties of a particular peak attributed to a certain molecular vibration. Time- and temperature-dependent analyses were performed on slag pastes to understand the polymerization of reactive silica in the system with time and temperature variation. For the time-dependent analysis, slag was activated with sodium and potassium silicates using two different 'n' values and three different silica modulus [Ms = SiO2/M2O] values. The temperature-dependent analysis was done by curing the samples at 60°C and 80°C. Similarly, fly ash was studied by activation with alkali hydroxides and alkali silicates. Under the same curing conditions, the fly ash samples were evaluated to analyze the effects of added silicates on alkali activation. The peak shifts in the FTIR spectra reflect changes in the structural nature of the matrix and can be identified using the deconvolution technique. A strong correlation is found between the concentrations of silicate monomer in the

  18. Gauss-Newton based kurtosis blind deconvolution of spectroscopic data

    Institute of Scientific and Technical Information of China (English)

    Jinghe Yuan; Ziqiang Hu

    2006-01-01

    The spectroscopic data recorded by a dispersion spectrophotometer are usually degraded by the response function of the instrument. To improve the resolving power, double or triple cascade spectrophotometers and narrow slits have been employed, but the total flux of the radiation decreases accordingly, resulting in a lower signal-to-noise ratio (SNR) and a longer measuring time. However, the spectral resolution can be improved by mathematically removing the effect of the instrument response function. Based on the Shalvi-Weinstein criterion, a Gauss-Newton based kurtosis blind deconvolution algorithm for spectroscopic data is proposed. Experiments with real measured Raman spectroscopic data show that this algorithm has excellent deconvolution capability.

  19. Statistical mechanics approach to the sample deconvolution problem.

    Science.gov (United States)

    Riedel, N; Berg, J

    2013-04-01

    In a multicellular organism different cell types express a gene in different amounts. Samples from which gene expression levels can be measured typically contain a mixture of different cell types; the resulting measurements thus give only averages over the different cell types present. Based on fluctuations in the mixture proportions from sample to sample it is in principle possible to reconstruct the underlying expression levels of each cell type: to deconvolute the sample. We use a statistical mechanics approach to the problem of deconvoluting such partial concentrations from mixed samples, explore this approach using Markov chain Monte Carlo simulations, and give analytical results for when and how well samples can be unmixed.
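
    Stripped of the statistical mechanics machinery, the underlying linear model is simple: each measurement is a proportion-weighted average of unknown per-cell-type levels, and fluctuating proportions make the system invertible. A toy least-squares sketch follows (the Bayesian/MCMC treatment in the paper handles noise and ill-conditioning far more carefully):

      import numpy as np

      # Each sample s measures m[s] = sum_c p[s, c] * x[c]: a mixture of
      # per-cell-type expression levels x weighted by proportions p.
      rng = np.random.default_rng(2)
      n_samples, n_types = 50, 3
      x_true = np.array([5.0, 1.0, 8.0])              # expression per cell type
      p = rng.dirichlet(np.ones(n_types), n_samples)  # mixture proportions
      m = p @ x_true + 0.1 * rng.standard_normal(n_samples)

      x_hat, *_ = np.linalg.lstsq(p, m, rcond=None)
      print(x_hat)   # close to x_true when the proportions vary enough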

  20. Spatially varying regularization of deconvolution in 3D microscopy.

    Science.gov (United States)

    Seo, J; Hwang, S; Lee, J-M; Park, H

    2014-08-01

    Confocal microscopy has become an essential tool to explore biospecimens in 3D. Confocal microscopy images are still degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL) method, the Tikhonov method and the split-gradient (SG) method, have been well received. The RL deconvolution method results in enhanced image quality, especially for Poisson noise. The Tikhonov deconvolution method improves on the RL method by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and is capable of incorporating many edge-preserving priors, resulting in improved image quality. The strength of spatial regularization is fixed regardless of spatial location for the Tikhonov and SG methods. The Tikhonov and SG deconvolution methods are improved upon in this study by allowing the strength of spatial regularization to differ for different spatial locations in a given image. The novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained by applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.

  1. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
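
    As a point of reference, plain SOR applied to deconvolution amounts to relaxed Gauss-Seidel sweeps on the regularized normal equations; the paper's contribution is an adaptive rule for the relaxation parameter, which the fixed omega below deliberately omits. A small 1-D sketch with arbitrary parameter choices:

      import numpy as np
      from scipy.linalg import toeplitz

      def sor_deconv(y, kernel, omega=1.5, delta=1e-3, n_iter=100):
          """SOR sweeps on (H^T H + delta I) x = H^T y for a 1-D causal
          convolution matrix H built from kernel; omega is kept fixed."""
          n = len(y)
          col = np.concatenate([kernel, np.zeros(n - len(kernel))])
          H = toeplitz(col, np.r_[kernel[0], np.zeros(n - 1)])
          A = H.T @ H + delta * np.eye(n)
          b = H.T @ y
          x = np.zeros(n)
          for _ in range(n_iter):
              for i in range(n):        # Gauss-Seidel sweep with relaxation
                  r = b[i] - A[i] @ x + A[i, i] * x[i]
                  x[i] = (1 - omega) * x[i] + omega * r / A[i, i]
          return x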

  2. An Improved Adaptive Deconvolution Algorithm for Single Image Deblurring

    Directory of Open Access Journals (Sweden)

    Hsin-Che Tsai

    2014-01-01

    One of the most common defects in digital photography is motion blur caused by camera shake. Shift-invariant motion blur can be modeled as a convolution of the true latent image and a point spread function (PSF) with additive noise. The goal of image deconvolution is to reconstruct a latent image from a degraded image. However, ringing artifacts inevitably arise in the deconvolution stage. To suppress undesirable artifacts, regularization-based methods have been proposed that use natural image priors to overcome the ill-posedness of the deconvolution problem. When the estimated PSF is erroneous to some extent or the PSF size is large, conventional regularization to reduce ringing leads to loss of image details. This paper focuses on nonblind deconvolution by adaptive regularization which preserves image details while suppressing ringing artifacts. The idea is to control the regularization weight adaptively according to local image characteristics. We adopt elaborated reference maps that indicate edge strength so that textured and smooth regions can be distinguished, and then impose an appropriate constraint on the optimization process. Experimental results on both synthesized and real images show that our method can restore the latent image with much less ringing while favoring sharp edges.

  3. A novel deconvolution beamforming algorithm for virtual phased arrays

    DEFF Research Database (Denmark)

    Fernandez Comesana, Daniel; Fernandez Grande, Efren; Tiana Roig, Elisabet;

    2013-01-01

    traditionally obtained using large arrays can be emulated by applying beamforming algorithms to data acquired from only two sensors. This paper presents a novel beamforming algorithm which uses a deconvolution approach to strongly reduce the presence of side lobes. A series of synthetic noise sources...

  4. Iterative optical vector-matrix processors (survey of selected achievable operations)

    Science.gov (United States)

    Casasent, D.; Neuman, C.

    1981-01-01

    An iterative optical vector-matrix multiplier with a microprocessor-controlled feedback loop capable of performing a wealth of diverse operations was described. A survey and description of many of its operations demonstrates the versatility and flexibility of this class of optical processor and its use in diverse applications. General operations described include: linear difference and differential equations, linear algebraic equations, matrix equations, matrix inversion, nonlinear matrix equations, deconvolution and eigenvalue and eigenvector computations. Engineering applications being addressed for these different operations and for the IOP are: adaptive phased-array radar, time-dependent system modeling, deconvolution and optimal control.

  5. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    Energy Technology Data Exchange (ETDEWEB)

    Montiel, H. [Centro de Ciencias Aplicadas y Desarrollo Tecnologico, Universidad Nacional Autonoma de Mexico UP. O. Box 70-360, Coyoacan, C.P. 04510 (Mexico)]. E-mail: herlinda_m@yahoo.com; Alvarez, G. [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico UP. O. Box 70-360, Coyoacan, C.P. 04510 (Mexico); Departamento de ciencia de los Materiales U.P. Adolfo L. Mateos Edif. 9, Av. Instituto Politecnico Nacional S/N, 07738 DF (Mexico); Betancourt, I. [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico UP. O. Box 70-360, Coyoacan, C.P. 04510 (Mexico); Zamorano, R. [Escuela de Fisica y Matematicas, IPN U.P. Adolfo L. Mateos Edif. 9, Av. Instituto Politecnico Nacional S/N, 07738 DF (Mexico); Valenzuela, R. [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico UP. O. Box 70-360, Coyoacan, C.P. 04510 (Mexico)

    2006-10-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co{sub 66}Fe{sub 4}B{sub 12}Si{sub 13}Nb{sub 4}Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and nanocrystallites. The parameters of resonant absorptions can be associated with the evolution of nanocrystallization during the annealing.

  6. Application of Maximum Entropy Deconvolution to γ-ray Skymaps

    CERN Document Server

    Raab, Susanne

    2015-01-01

    Skymaps measured with imaging atmospheric Cherenkov telescopes (IACTs) represent the real source distribution convolved with the point spread function of the observing instrument. Current IACTs have an angular resolution on the order of 0.1°, which is rather large for the study of morphological structures and for comparing the morphology in γ-rays to measurements at other wavelengths where the instruments have better angular resolutions. Serendipitously, it is possible to approximate the underlying true source distribution by applying a deconvolution algorithm to the observed skymap, thus effectively improving the instrument's angular resolution. From the multitude of existing deconvolution algorithms several are already used in astronomy, but in the special case of γ-ray astronomy most of these algorithms are challenged by the high noise level within the measured data. One promising algorithm for application to γ-ray data is the Maximum Entropy Algorithm. The advantages of th...

  7. Kernel methods and minimum contrast estimators for empirical deconvolution

    CERN Document Server

    Delaigle, Aurore

    2010-01-01

    We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This 'infinite order' kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.

  8. A new information fusion white noise deconvolution estimator

    Institute of Scientific and Technical Information of China (English)

    Xiaojun SUN; Shigang WANG; Zili DENG

    2009-01-01

    The white noise deconvolution or input white noise estimation problem has important applications in oil seismic exploration, communication and signal processing. Using the modern time series analysis method, based on the autoregressive moving average (ARMA) innovation model, a new information fusion white noise deconvolution estimator is presented for general multisensor systems with different local dynamic models and correlated noises. It can handle the input white noise fused filtering, prediction and smoothing problems, and it is applicable to systems with colored measurement noises. It is locally optimal and globally suboptimal. The accuracy of the fuser is higher than that of each local white noise estimator. In order to compute the optimal weights, the formula for computing the local estimation error cross-covariances is given. A Monte Carlo simulation example for a system with Bernoulli-Gaussian input white noise shows the effectiveness and performance of the approach.

  9. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan;

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  10. Homomorphic Deconvolution for MUAP Estimation From Surface EMG Signals.

    Science.gov (United States)

    Biagetti, Giorgio; Crippa, Paolo; Orcioni, Simone; Turchetti, Claudio

    2017-03-01

    This paper presents a technique for parametric model estimation of the motor unit action potential (MUAP) from the surface electromyography (sEMG) signal using homomorphic deconvolution. The cepstrum-based deconvolution removes the effect of the stochastic impulse train, from which the sEMG signal originates, from the power spectrum of the sEMG signal itself. In this way, only information on MUAP shape and amplitude is maintained, and then used to estimate the parameters of a time-domain model of the MUAP itself. In order to validate the effectiveness of this technique, sEMG signals recorded during several biceps curl exercises were used for MUAP amplitude and time scale estimation. The parameters so extracted, as functions of time, were used to evaluate muscle fatigue, showing good agreement with previously published results.
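
    The homomorphic step itself is compact: take the log magnitude spectrum, transform to the cepstral (quefrency) domain, keep only the low-quefrency part attributable to the slowly varying MUAP shape, and transform back. A minimal sketch (the cutoff n_keep is an arbitrary illustration; the paper additionally fits a parametric time-domain MUAP model, omitted here):

      import numpy as np

      def cepstral_pulse_spectrum(semg, n_keep=30):
          """Homomorphic deconvolution sketch: low-pass liftering of the
          real cepstrum separates the slowly varying MUAP magnitude
          spectrum from the rapidly varying impulse-train contribution."""
          log_mag = np.log(np.abs(np.fft.rfft(semg)) + 1e-12)
          cep = np.fft.irfft(log_mag)          # real cepstrum
          lifter = np.zeros(len(cep))
          lifter[:n_keep] = 1.0                # keep low quefrencies,
          lifter[-n_keep + 1:] = 1.0           # symmetrically
          smooth_log = np.fft.rfft(cep * lifter)
          return np.exp(np.real(smooth_log))   # estimated |MUAP spectrum|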

  11. Deconvolution from wave front sensing using the frozen flow hypothesis.

    Science.gov (United States)

    Jefferies, Stuart M; Hart, Michael

    2011-01-31

    Deconvolution from wave front sensing (DWFS) is an image-reconstruction technique for compensating the image degradation due to atmospheric turbulence. DWFS requires the simultaneous recording of high cadence short-exposure images and wave-front sensor (WFS) data. A deconvolution algorithm is then used to estimate both the target object and the wave front phases from the images, subject to constraints imposed by the WFS data and a model of the optical system. Here we show that by capturing the inherent temporal correlations present in the consecutive wave fronts, using the frozen flow hypothesis (FFH) during the modeling, high-quality object estimates may be recovered in much worse conditions than when the correlations are ignored.

  12. SIFT: Spherical-deconvolution informed filtering of tractograms.

    Science.gov (United States)

    Smith, Robert E; Tournier, Jacques-Donald; Calamante, Fernando; Connelly, Alan

    2013-02-15

    Diffusion MRI allows the structural connectivity of the whole brain (the 'tractogram') to be estimated in vivo non-invasively using streamline tractography. The biological accuracy of these data sets is however limited by the inherent biases associated with the reconstruction method. Here we propose a method to retrospectively improve the accuracy of these reconstructions, by selectively filtering out streamlines from the tractogram in a manner that improves the fit between the streamline reconstruction and the underlying diffusion images. This filtering is guided by the results of spherical deconvolution of the diffusion signal, hence the acronym SIFT: spherical-deconvolution informed filtering of tractograms. Data sets processed by this algorithm show a marked reduction in known reconstruction biases, and improved biological plausibility. Emerging methods in diffusion MRI, particularly those that aim to characterise and compare the structural connectivity of the brain, should benefit from the improved accuracy of the reconstruction.

  13. Deconvolution Kalman filtering for force measurements of revolving wings

    Science.gov (United States)

    Vester, R.; Percin, M.; van Oudheusden, B.

    2016-09-01

    The applicability of a deconvolution Kalman (DK) filtering approach is assessed for the force measurements on a flat plate undergoing a revolving motion, as an alternative procedure to correct for test setup vibrations. The system identification process required for the correct implementation of the deconvolution Kalman filter is explained in detail. It is found that in the presence of a relatively complex forcing history, the DK filter is better suited to filtering out structural test rig vibrations than conventional filtering techniques based on, for example, low-pass or moving-average filtering. The improvement is especially evident in the characterization of the generated force peaks. Consequently, more reliable force data are obtained, which is vital for validating semi-empirical estimation models, and is also relevant for correlating identified flow phenomena with the force production.

  14. Least squares deconvolution of the stellar intensity and polarization spectra

    CERN Document Server

    Kochukhov, O; Piskunov, N

    2010-01-01

    Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpretation of the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis, taking into account anomalous Zeeman splitting of spectral lines. This code is applied to theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...

  15. Reconstruction of the insulin secretion rate by Bayesian deconvolution

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    Reconstruction of the insulin secretion rate (ISR) can be done by solving a highly ill-posed deconvolution problem. We present a Bayesian methodology for the estimation of scaled densities of phase-type distributions via Markov chain Monte Carlo techniques, whereby closed form evaluation of the ISR is possible. We demonstrate the methodology on simulated data, concluding that the method seems a promising alternative to existing methods where the ISR is considered piecewise constant.

  16. Deconvolution from Wavefront Sensing Using Optimal Wavefront Estimators

    Science.gov (United States)

    1996-12-01


  17. Deconvolution techniques for characterizing indoor UWB wireless channel

    Institute of Scientific and Technical Information of China (English)

    Wang Yang; Zhang Naitong; Zhang Qinyu; Zhang Zhongzhao

    2008-01-01

    A deconvolution algorithm is proposed to account for the distortions of the impulse shape introduced by the propagation process. By finding the best correlation of the received waveform with multiple templates, the number of multipath components is reduced as a result of eliminating "phantom paths", and the captured energy increases. Moreover, it needs only a single reference measurement in a real measurement environment (no anechoic chamber is needed), which greatly simplifies the template acquisition procedure.

  18. Interferometry by deconvolution of multicomponent multioffset GPR data

    OpenAIRE

    Slob, E.C.

    2009-01-01

    Interferometric techniques are now well known to retrieve data between two receivers by the cross correlation of the data recorded by these receivers. Cross-correlation methods for interferometry rely mostly on the assumption that the medium is loss free and that the sources are all around the receivers. A recently developed method introduced interferometry by deconvolution that is insensitive to loss mechanisms by principle and requires sources only on one side of the receivers. In this pape...

  19. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  20. A new deconvolution approach to perfusion imaging exploiting spatial correlation

    Science.gov (United States)

    Orten, Burkay B.; Karl, W. Clem; Sahani, Dushyant V.; Pien, Homer

    2008-03-01

    The parts of the human body affected by a disease not only undergo structural changes but also demonstrate significant physiological (functional) abnormalities. An important parameter that reveals the functional state of tissue is the flow of blood per unit tissue volume, or perfusion, which can be obtained using dynamic imaging methods. One mathematical approach widely used for estimating perfusion from dynamic imaging data is based on a convolutional tissue-flow model. In these approaches, deconvolution of the observed data is necessary to obtain the important physiological parameters within a voxel. Although several alternatives have been proposed for deconvolution, all of them treat neighboring voxels independently and do not exploit the spatial correlation between voxels or the temporal correlation within a voxel over time. These simplistic approaches result in a noisy perfusion map with poorly defined region boundaries. In this paper, we propose a novel perfusion estimation method which incorporates spatial as well as temporal correlation into the deconvolution process. The performance of our method is compared to standard methods using independent voxel processing. Both simulated and real data experiments illustrate the potential of our method.

  1. Tissue-specific sparse deconvolution for brain CT perfusion.

    Science.gov (United States)

    Fang, Ruogu; Jiang, Haodi; Huang, Junzhou

    2015-12-01

    Enhancing perfusion maps in low-dose computed tomography perfusion (CTP) for cerebrovascular disease diagnosis is a challenging task, especially for low-contrast tissue categories where infarct core and ischemic penumbra usually occur. Sparse perfusion deconvolution has been recently proposed to effectively improve the image quality and diagnostic accuracy of low-dose perfusion CT by extracting the complementary information from the high-dose perfusion maps to restore the low-dose ones using a joint spatio-temporal model. However, the low-contrast tissue classes where infarct core and ischemic penumbra are likely to occur in cerebral perfusion CT tend to be over-smoothed, leading to loss of essential biomarkers. In this paper, we propose a tissue-specific sparse deconvolution approach to preserve the subtle perfusion information in the low-contrast tissue classes. We first build tissue-specific dictionaries from segmentations of high-dose perfusion maps using online dictionary learning, and then perform deconvolution-based hemodynamic parameter estimation for block-wise tissue segments on the low-dose CTP data. Extensive validation on clinical datasets of patients with cerebrovascular disease demonstrates the superior performance of our proposed method compared to the state of the art, and its potential to improve diagnostic accuracy by increasing the differentiation between normal and ischemic tissues in the brain.

  2. Deconvolution of interferometric data using interior point iterative algorithms

    Science.gov (United States)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. Emphasis is placed on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumptions on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
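
    As a point of reference, a minimal sketch of the Richardson-Lucy iteration mentioned above follows; it preserves non-negativity and, up to edge effects, total flux. This is the textbook form, not the scale-invariant-divergence algorithms of Lanteri et al. (2015).

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
            """Textbook Richardson-Lucy; image and psf are non-negative 2-D arrays."""
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / (blurred + eps)        # data / current model
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate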

  3. The deconvolution of differential scanning calorimetry unfolding transitions.

    Science.gov (United States)

    Spink, Charles H

    2015-04-01

    This paper is a review of a process for deconvolution of unfolding thermal transitions measured by differential scanning calorimetry. The mathematical background is presented along with illustrations of how the unfolding data are processed to resolve the number of sequential transitions needed to describe an unfolding mechanism and to determine the thermodynamic properties of the intermediate states. Examples of data obtained for a simple two-state unfolding of a G-quadruplex DNA structure derived from the basic human telomere sequence, (TTAGGG)4TT, are used to present some of the basic issues in treating the DSC data. A more complex unfolding mechanism is also presented that requires deconvolution of a multistate transition, the unfolding of a related human telomere structure, (TTAGGG)12TT. The intent of the discussion is to show the steps in deconvolution, and to present the data at each step to help clarify how the information is derived from the various mathematical manipulations. Copyright © 2014. Published by Elsevier Inc.
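
    For the simple two-state case described above, the standard van 't Hoff relations on which such a deconvolution rests (stated here for orientation; the notation is not taken from the paper) are

        $$K(T)=\exp\!\left[\frac{\Delta H_{\mathrm{vH}}}{R}\left(\frac{1}{T_m}-\frac{1}{T}\right)\right],\qquad \alpha(T)=\frac{K(T)}{1+K(T)},\qquad C_p^{\mathrm{exc}}(T)=\Delta H_{\mathrm{cal}}\,\frac{d\alpha}{dT}.$$

    Fitting the measured excess heat capacity then yields $T_m$, $\Delta H_{\mathrm{vH}}$ and $\Delta H_{\mathrm{cal}}$; a multistate deconvolution sums such terms over the sequential transitions.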

  4. Global uniform risk bounds for wavelet deconvolution estimators

    CERN Document Server

    Lounici, Karim; 10.1214/10-AOS836

    2011-01-01

    We consider the statistical deconvolution problem where one observes $n$ replications from the model $Y=X+\epsilon$, where $X$ is the unobserved random signal of interest and $\epsilon$ is an independent random error with distribution $\phi$. Under weak assumptions on the decay of the Fourier transform of $\phi$, we derive upper bounds for the finite-sample sup-norm risk of wavelet deconvolution density estimators $f_n$ for the density $f$ of $X$, where $f:\mathbb{R}\to\mathbb{R}$ is assumed to be bounded. We then derive lower bounds for the minimax sup-norm risk over Besov balls in this estimation problem and show that wavelet deconvolution density estimators attain these bounds. We further show that linear estimators adapt to the unknown smoothness of $f$ if the Fourier transform of $\phi$ decays exponentially and that a corresponding result holds true for the hard thresholding wavelet estimator if $\phi$ decays polynomially. We also analyze the case where $f$ is a "supersmooth"/analytic density. We finall...
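
    The identity underlying this estimation problem (a standard fact rather than a result of the paper) is that the density of $Y$ is the convolution $f_Y=f\ast\phi$, so on the Fourier side

        $$\widehat{f}(t)=\frac{\widehat{f_Y}(t)}{\widehat{\phi}(t)},$$

    and a deconvolution density estimator replaces $\widehat{f_Y}$ by its empirical counterpart, damped by a kernel (or wavelet) at high frequencies where division by the decaying $\widehat{\phi}$ becomes unstable.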

  5. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    Science.gov (United States)

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-03-21

    Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a non-linear mixed effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic data generated for an IVIVC study of extended release (ER) formulations of a BCS class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. This also implies that, unlike with numerical deconvolution, in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis.

  6. A Statistician’s View on Deconvolution and Unfolding

    CERN Document Server

    Panaretos, Victor M

    2011-01-01

    We briefly review some of the basic features of unfolding problems from the point of view of the statistician. To illustrate these, we mostly concentrate on the particular instance of unfolding called deconvolution. We discuss the issue of ill-posedness, the bias-variance trade-off, and regularisation tuning, placing emphasis on the important class of kernel density estimators. We also briefly consider basic aspects of the more general unfolding problem and mention some of the points that were raised during the discussion session of the unfolding workshop.

  7. Blind Deconvolution in Nonminimum Phase Systems Using Cascade Structure

    Directory of Open Access Journals (Sweden)

    Liqing Zhang

    2007-01-01

    Full Text Available We introduce a novel cascade demixing structure for multichannel blind deconvolution in nonminimum phase systems. To simplify the learning process, we decompose the demixing model into a causal finite impulse response (FIR) filter and an anticausal scalar FIR filter, and construct a permutable cascade structure from the two subfilters. After discussing the geometrical structure of the FIR filter manifold, we develop natural gradient algorithms for both FIR subfilters. Furthermore, we derive the stability conditions of the algorithms using the permutable characteristic of the cascade structure. Finally, computer simulations are provided to show the good learning performance of the proposed method.

  8. Improvement of FISH mapping resolution on combed DNA molecules by iterative constrained deconvolution: a quantitative study.

    Science.gov (United States)

    Monier, K; Heliot, L; Rougeulle, C; Heard, E; Robert-Nicoud, M; Vourc'h, C; Bensimon, A; Usson, Y

    2001-01-01

    Image restoration approaches, such as digital deconvolution, are becoming widely used for improving the quality of microscopic images. However, no quantification of the gain in resolution of fluorescence images is available. We show that, after iterative constrained deconvolution, fluorescent cosmid signals appear to be 25% smaller, and 1.2-kb fragment signals on combed molecules faithfully display the expected length.

  9. Molecular dynamics in cytochrome c oxidase Moessbauer spectra deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Bossis, Fabrizio [Department of Medical Biochemistry, Medical Biology and Medical Physics (DIBIFIM), University of Bari 'Aldo Moro', Bari (Italy); Palese, Luigi L., E-mail: palese@biochem.uniba.it [Department of Medical Biochemistry, Medical Biology and Medical Physics (DIBIFIM), University of Bari 'Aldo Moro', Bari (Italy)

    2011-01-07

    Research highlights: → Cytochrome c oxidase molecular dynamics serve to predict Moessbauer lineshape widths. → Half-height widths are used in modeling of Lorentzian doublets. → Such spectral deconvolutions are useful in detecting the enzyme intermediates. -- Abstract: In this work, low temperature molecular dynamics simulations of cytochrome c oxidase are used to predict an experimentally observable quantity, namely the Moessbauer spectral width. Predicted lineshapes are used to model Lorentzian doublets, with which published cytochrome c oxidase Moessbauer spectra were simulated. The constraints imposed by molecular dynamics on spectral lineshapes permit useful information to be obtained, such as the presence of multiple chemical species in the binuclear center of cytochrome c oxidase. Moreover, a quality benchmark for molecular dynamics simulations can be obtained. Despite the overwhelming importance of dynamics in electron-proton transfer systems, limited work has been devoted to unraveling how realistic molecular dynamics simulation results are. In this work, molecular dynamics based predictions are found to be in good agreement with published experimental spectra, showing that we can confidently rely on current simulations. Molecular dynamics based deconvolution of Moessbauer spectra will lead to renewed interest in the application of this approach in bioenergetics.

  10. Parallel deconvolution of large 3D images obtained by confocal laser scanning microscopy.

    Science.gov (United States)

    Pawliczek, Piotr; Romanowska-Pawliczek, Anna; Soltys, Zbigniew

    2010-03-01

    Various deconvolution algorithms are often used for restoration of digital images. Image deconvolution is especially needed for the correction of three-dimensional images obtained by confocal laser scanning microscopy. Such images suffer from distortions, particularly in the Z dimension. As a result, reliable automatic segmentation of these images may be difficult or even impossible. Effective deconvolution algorithms are memory-intensive and time-consuming. In this work, we propose a parallel version of the well-known Richardson-Lucy deconvolution algorithm developed for a system with distributed memory and implemented with the use of Message Passing Interface (MPI). It enables significantly more rapid deconvolution of two-dimensional and three-dimensional images by efficiently splitting the computation across multiple computers. The implementation of this algorithm can be used on professional clusters provided by computing centers as well as on simple networks of ordinary PC machines.

  11. H∞ deconvolution filter design for time-delay linear continuous-time systems

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper proposes an H∞ deconvolution design for time-delay linear continuous-time systems. We first analyze the general structure and the innovation structure of the H∞ deconvolution filter. The deconvolution filter with innovation structure is made up of an output observer and a linear mapping, where the latter reflects the internal connection between the unknown input signal and the output estimation error. Based on the bounded real lemma, a time domain design approach and a sufficient condition for the existence of the deconvolution filter are presented. The parameterization of the deconvolution filter can be completed by solving a Riccati equation. The proposed method is useful for cases that do not require statistical information about disturbances. Finally, a numerical example is given to demonstrate the performance of the proposed filter.

  12. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    Science.gov (United States)

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
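
    A hedged sketch of the frequency-domain core of deterministic deconvolution follows: the measured air-wave wavelet is divided out of each trace, with a simple water-level stabilizer guarding against division by small spectral amplitudes. The water-level device is a common stabilization choice, not necessarily the authors' exact recipe.

        import numpy as np

        def deterministic_deconvolution(trace, wavelet, water_level=0.01):
            """Divide the measured source wavelet out of a trace (1-D arrays)."""
            n = len(trace) + len(wavelet) - 1
            T = np.fft.rfft(trace, n)
            W = np.fft.rfft(wavelet, n)
            mag = np.abs(W)
            floor = water_level * mag.max()
            # Raise small-magnitude wavelet bins to the floor, keeping their phase
            W_stab = np.where(mag < floor, W * floor / (mag + 1e-30), W)
            return np.fft.irfft(T / W_stab, n)[:len(trace)]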

  13. Network deconvolution as a general method to distinguish direct dependencies in networks.

    Science.gov (United States)

    Feizi, Soheil; Marbach, Daniel; Médard, Muriel; Kellis, Manolis

    2013-08-01

    Recognizing direct relationships between variables connected in a network is a pervasive problem in biological, social and information sciences as correlation-based networks contain numerous indirect relationships. Here we present a general method for inferring direct effects from an observed correlation matrix containing both direct and indirect effects. We formulate the problem as the inverse of network convolution, and introduce an algorithm that removes the combined effect of all indirect paths of arbitrary length in a closed-form solution by exploiting eigen-decomposition and infinite-series sums. We demonstrate the effectiveness of our approach in several network applications: distinguishing direct targets in gene expression regulatory networks; recognizing directly interacting amino-acid residues for protein structure prediction from sequence alignments; and distinguishing strong collaborations in co-authorship social networks using connectivity information alone. In addition to its theoretical impact as a foundational graph theoretic tool, our results suggest network deconvolution is widely applicable for computing direct dependencies in network science across diverse disciplines.
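
    The closed form described above can be sketched compactly: if the observed matrix aggregates all indirect paths, $G_{obs}=G_{dir}+G_{dir}^2+\dots$, then $G_{dir}=G_{obs}(I+G_{obs})^{-1}$, computed by rescaling eigenvalues. The snippet below is a minimal illustration of that step; the paper's preprocessing is simplified and the scaling factor beta is illustrative.

        import numpy as np

        def network_deconvolution(g_obs, beta=0.9):
            g = (g_obs + g_obs.T) / 2.0            # work with a symmetric matrix
            vals, vecs = np.linalg.eigh(g)
            vals *= beta / np.abs(vals).max()      # keep the geometric series convergent
            vals_dir = vals / (1.0 + vals)         # invert G_obs = sum_k G_dir^k
            return vecs @ np.diag(vals_dir) @ vecs.T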

  14. Deconvolution of magnetic acoustic change complex (mACC).

    Science.gov (United States)

    Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W

    2014-11-01

    The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal hearing young adults. Responses were measured to: (i) the onset of the speech train, (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train using short (jittered around 135ms) and long (1500ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOAs conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOAs conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses were observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transition of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes

  15. Reversible jump Markov chain Monte Carlo for deconvolution.

    Science.gov (United States)

    Kang, Dongwoo; Verotta, Davide

    2007-06-01

    To solve the problem of estimating an unknown input function to a linear time invariant system we propose an adaptive non-parametric method based on reversible jump Markov chain Monte Carlo (RJMCMC). We use piecewise polynomial functions (splines) to represent the input function. The RJMCMC algorithm allows the exploration of a large space of competing models, in our case the collection of splines corresponding to alternative positions of breakpoints, and it is based on the specification of transition probabilities between the models. RJMCMC determines: the number and the position of the breakpoints, and the coefficients determining the shape of the spline, as well as the corresponding posterior distribution of breakpoints, number of breakpoints, coefficients and arbitrary statistics of interest associated with the estimation problem. Simulation studies show that the RJMCMC method can obtain accurate reconstructions of complex input functions, and obtains better results compared with standard non-parametric deconvolution methods. Applications to real data are also reported.

  16. A novel blind deconvolution algorithm using single frequency bin

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Former frequency-domain blind deconvolution algorithms need to consider a large number of frequency bins and recover the sources in different orders and with different amplitudes in each frequency bin, so they suffer from permutation and amplitude indeterminacy troubles. Based on the sliding discrete Fourier transform, the presented deconvolution algorithm can directly recover time-domain sources from the frequency-domain convolutive model using a single frequency bin. It only needs to execute blind separation of an instantaneous mixture once, so there are no permutation and amplitude indeterminacy troubles. Compared with former algorithms, the algorithm greatly reduces the computation cost as only one frequency bin is considered. Its good and robust performance is demonstrated by simulations when the signal-to-noise ratio is high.

  17. Transits against Fainter Stars: The Power of Image Deconvolution

    CERN Document Server

    Sackett, Penny D; Bayliss, Daniel D R; Weldrake, David T F; Tingley, Brandon; 10.1017/S1743921308026239

    2009-01-01

    Compared to bright star searches, surveys for transiting planets against fainter (V=12-18) stars have the advantage of much higher sky densities of dwarf star primaries, which afford easier detection of small transiting bodies. Furthermore, deep searches are capable of probing a wider range of stellar environments. On the other hand, for a given spatial resolution and transit depth, deep searches are more prone to confusion from blended eclipsing binaries. We present a powerful mitigation strategy for the blending problem that includes the use of image deconvolution and high resolution imaging. The techniques are illustrated with Lupus-TR-3 and very recent IR imaging with PANIC on Magellan. The results are likely to have implications for the CoRoT and KEPLER missions designed to detect transiting planets of terrestrial size.

  18. Euler Deconvolution with Improved Accuracy and Multiple Different Structural Indices

    Institute of Scientific and Technical Information of China (English)

    G R J Cooper

    2008-01-01

    Euler deconvolution is a semi-automatic interpretation method that is frequently used with magnetic and gravity data. For a given source type, which is specified by its structural index (SI), it provides an estimate of the source location. It is demonstrated here that by computing the solution space of individual data points and selecting common source locations, the accuracy of the result can be improved. Furthermore, only a slight modification of the method is necessary to allow solutions for any number of different SIs to be obtained simultaneously. The method is applicable to both evenly and unevenly sampled geophysical data and is demonstrated on gravity and magnetic data. Source code (in Matlab format) is available from www.iamg.org.
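
    For context, the windowed least-squares solve that Euler deconvolution methods build on rearranges Euler's homogeneity equation $(x-x_0)T_x+(y-y_0)T_y+(z-z_0)T_z=N(B-T)$ into a linear system for the source position and background field. The sketch below is a generic illustration under that standard formulation, with sign conventions left to the user.

        import numpy as np

        def euler_window_solve(x, y, z, T, Tx, Ty, Tz, si):
            """Least-squares Euler solution over one data window (1-D arrays)."""
            A = np.column_stack([Tx, Ty, Tz, si * np.ones_like(T)])
            b = x * Tx + y * Ty + z * Tz + si * T
            (x0, y0, z0, bg), *_ = np.linalg.lstsq(A, b, rcond=None)
            return x0, y0, z0, bg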

  19. Bayesian Inference for Radio Observations - Going beyond deconvolution

    CERN Document Server

    Lochner, Michelle; Kunz, Martin; Natarajan, Iniyan; Oozeer, Nadeem; Smirnov, Oleg; Zwart, Jon

    2015-01-01

    Radio interferometers suffer from the problem of missing information in their data, due to the gaps between the antennas. This results in artifacts, such as bright rings around sources, in the images obtained. Multiple deconvolution algorithms have been proposed to solve this problem and produce cleaner radio images. However, these algorithms are unable to correctly estimate uncertainties in derived scientific parameters or to always include the effects of instrumental errors. We propose an alternative technique called Bayesian Inference for Radio Observations (BIRO) which uses a Bayesian statistical framework to determine the scientific parameters and instrumental errors simultaneously directly from the raw data, without making an image. We use a simple simulation of Westerbork Synthesis Radio Telescope data including pointing errors and beam parameters as instrumental effects, to demonstrate the use of BIRO.

  20. Envelope based nonlinear blind deconvolution approach for ultrasound imaging

    Directory of Open Access Journals (Sweden)

    L.T. Chira

    2012-06-01

    Full Text Available The resolution of ultrasound medical images is still an important problem despite the efforts of researchers. In this paper we present a nonlinear blind deconvolution to eliminate the blurring effect based on the measured radio-frequency signal envelope. This algorithm is executed in two steps. Firstly, we estimate the Point Spread Function (PSF), and secondly we use the estimated PSF to iteratively remove its effect. The proposed algorithm is a greedy algorithm, also called matching pursuit or CLEAN. The use of this algorithm is motivated because theoretically it avoids the so-called inverse problem, which usually needs regularization to obtain an optimal solution. The results are presented using 1D simulated signals in terms of visual evaluation and nMSE in comparison with the two best-known regularization methods for the least squares problem, Tikhonov regularization (l2-norm) and Total Variation (l1-norm).

  1. Performance of Deconvolution Methods in Estimating CBOC-Modulated Signals

    Directory of Open Access Journals (Sweden)

    Danai Skournetou

    2011-01-01

    Full Text Available Multipath propagation is one of the most difficult error sources to compensate in global navigation satellite systems due to its environment-specific nature. In order to gain a better understanding of its impact on the received signal, the establishment of a theoretical performance limit can be of great assistance. In this paper, we derive the Cramer Rao lower bounds (CRLBs, where in one case, the unknown parameter vector corresponds to any of the three multipath signal parameters of carrier phase, code delay, and amplitude, and in the second case, all possible combinations of joint parameter estimation are considered. Furthermore, we study how various channel parameters affect the computed CRLBs, and we use these bounds to compare the performance of three deconvolution methods: least squares, minimum mean square error, and projection onto convex space. In all our simulations, we employ CBOC modulation, which is the one selected for future Galileo E1 signals.

  2. Deconvolution of petroleum mixtures using mid-FTIR analysis and non-negative matrix factorization

    Science.gov (United States)

    Livanos, George; Zervakis, Michalis; Pasadakis, Nikos; Karelioti, Marouso; Giakos, George

    2016-11-01

    The aim of this study is to develop an efficient, robust and cost-effective methodology capable of both identifying the chemical fractions in complex commercial petroleum products and numerically estimating their concentrations within the mixture sample. We explore a methodology based on attenuated total reflectance Fourier transform infrared (ATR-FTIR) analytical signals, combined with a modified factorization algorithm, to solve this ‘mixture problem’, first in qualitative and then in quantitative mode. The proposed decomposition approach is self-adapting to the data, requires no prior knowledge, and is capable of accurately estimating the weight contributions of constituents in the entire chemical compound. The results of applying the presented method to petroleum analysis indicate that it is possible to deconvolve the mixing process and recover the content of a chemically complex petroleum mixture using the infrared signals of a limited number of samples and the principal substances forming the mixture. A target application of the proposed methodology is the quality control of commercial gasoline by identifying and quantifying the individual fractions utilized in its formulation via a fast, robust and efficient procedure based on mathematical analysis of the acquired spectra.
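
    In the simplest setting, this kind of mixture deconvolution can be prototyped with off-the-shelf non-negative matrix factorization; the sketch below uses scikit-learn's stock NMF as a stand-in for the authors' modified factorization, with random placeholder spectra (all sizes illustrative).

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = rng.random((10, 500))                     # 10 mixture spectra (placeholder data)
        model = NMF(n_components=3, init="nndsvd", max_iter=500)
        W = model.fit_transform(X)                    # per-sample component weights (10 x 3)
        H = model.components_                         # recovered component spectra (3 x 500)
        fractions = W / W.sum(axis=1, keepdims=True)  # weight contributions per sample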

  3. Non-stationary blind deconvolution of medical ultrasound scans

    Science.gov (United States)

    Michailovich, Oleg V.

    2017-03-01

    In linear approximation, the formation of a radio-frequency (RF) ultrasound image can be described based on a standard convolution model in which the image is obtained as a result of convolution of the point spread function (PSF) of the ultrasound scanner in use with a tissue reflectivity function (TRF). Due to the band-limited nature of the PSF, the RF images can only be acquired at a finite spatial resolution, which is often insufficient for proper representation of the diagnostic information contained in the TRF. One particular way to alleviate this problem is by means of image deconvolution, which is usually performed in a "blind" mode, when both PSF and TRF are estimated at the same time. Despite its proven effectiveness, blind deconvolution (BD) still suffers from a number of drawbacks, chief among which stems from its dependence on a stationary convolution model, which is incapable of accounting for the spatial variability of the PSF. As a result, virtually all existing BD algorithms are applied to localized segments of RF images. In this work, we introduce a novel method for non-stationary BD, which is capable of recovering the TRF concurrently with the spatially variable PSF. Particularly, our approach is based on semigroup theory which allows one to describe the effect of such a PSF in terms of the action of a properly defined linear semigroup. The approach leads to a tractable optimization problem, which can be solved using standard numerical methods. The effectiveness of the proposed solution is supported by experiments with in vivo ultrasound data.

  4. Deconvolution of IRTF Observations of Jupiter's Moon Io

    Science.gov (United States)

    Wernher, Hannah; Rathbun, Julie A.; Spencer, John R.

    2016-10-01

    Io is an active volcanic world with a heat output more than 40 times that of Earth. While spacecraft have been used to study Io's volcanoes, their high level of variability requires Earth-based observations to reveal their eruptions in the absence of spacecraft data. Our nearly 20 years of observations from the NASA InfraRed Telescope Facility (IRTF) have been used to monitor volcanic eruptions on Io. Our observations allow us not only to better understand the eruption properties of Ionian volcanoes, but also how the volcanic eruptions affect the rest of the Jovian system, such as the Io plasma torus, sodium clouds, Jovian magnetosphere, and aurorae. While our Jupiter occultation lightcurves of an eclipsed Io have been the focus of this program, due to their ability to determine volcano brightnesses and 1D locations, those observations only allow us to measure volcanic eruptions on the sub-Jovian hemisphere. We also observe Io in reflected sunlight so that we can observe other longitudes on Io, but brighter eruptions are required for us to be able to distinguish them above the reflected sunlight. We are able to increase the spatial resolution of these images in order to detect and locate fainter hotspots. We have employed shift-and-add techniques using multiple short exposures to detect eruptions in the past (Rathbun and Spencer, 2010). We will report on the use of publicly available deconvolution algorithms to further improve spatial resolution and hot spot detectability, using images of a standard star as our PSF, including experiments with performing the deconvolution both before and after shift-and-add. We will present results of observations from 2007 and 2013.

  5. Generalized TV and sparse decomposition of the ultrasound image deconvolution model based on fusion technology.

    Science.gov (United States)

    Wen, Qiaonong; Wan, Suiren

    2013-01-01

    Ultrasound image deconvolution involves both noise reduction and image feature enhancement: denoising is essentially equivalent to low-pass filtering, while feature enhancement strengthens the high-frequency parts, and these two requirements are often combined. Since they are contradictory, a reasonable balance must be struck between them. The partial differential equation model of image deconvolution is a method based on diffusion theory, while sparse decomposition deconvolution is an image-representation-based method. The mechanisms of these two methods are not the same, and each has its own characteristics. In the contourlet transform domain, we combine the strengths of the two deconvolution methods through image fusion, and introduce the entropy of the local orientation energy ratio into the fusion decision-making, treating the low-frequency and high-frequency coefficients differently according to the actual situation. As the deconvolution process inevitably blurs image edge information, we fuse the edge gray-scale image information into the deconvolution results in order to compensate for the missing edge information. Experiments show that our method is better than using either deconvolution method separately, and restores part of the image edge information.

  6. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  7. The adaptive-loop-gain adaptive-scale CLEAN deconvolution of radio interferometric images

    CERN Document Server

    Zhang, L; Liu, X

    2016-01-01

    CLEAN algorithms are a class of deconvolution solvers which are widely used to remove the effect of the telescope Point Spread Function (PSF). Loop gain is one important parameter in CLEAN algorithms. Currently the parameter is fixed during deconvolution, which restricts the performance of CLEAN algorithms. In this paper, we propose a new deconvolution algorithm with an adaptive loop gain scheme, which is referred to as the adaptive-loop-gain adaptive-scale CLEAN (Algas-Clean) algorithm. The test results show that the new algorithm can give a more accurate model with faster convergence.
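
    To show where the loop gain enters, here is a minimal 1-D Högbom-style CLEAN sketch with the conventional fixed gain; the paper's contribution is precisely to adapt this gain (and use adaptive scales) instead. The dirty beam is assumed to be sampled on the same grid as the dirty image.

        import numpy as np

        def hogbom_clean(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
            residual = dirty.astype(float).copy()
            model = np.zeros_like(residual)
            center = int(np.argmax(psf))
            for _ in range(n_iter):
                peak = int(np.argmax(np.abs(residual)))
                if abs(residual[peak]) < threshold:
                    break
                amp = gain * residual[peak]                    # fixed loop gain
                model[peak] += amp
                residual -= amp * np.roll(psf, peak - center)  # wrap-around ignored
            return model, residual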

  8. IMAGE DE-BLURRING USING WIENER DE-CONVOLUTION AND WAVELET FOR DIFFERENT BLURRING KERNEL

    OpenAIRE

    M.Tech Research Scholar Shuchi Singh*, Asst Professor Vipul Awasthi, Asst Professor Nitin Sahu

    2016-01-01

    Image de-convolution is an active research area concerned with recovering a sharp image after blurring by a convolution. One of the problems in image de-convolution is how to preserve texture structures while removing blur in the presence of noise. Various methods have been used, such as gradient-based methods, sparsity-based methods, and nonlocal self-similarity methods. In this thesis, we have used the conventional non-blind method of Wiener de-convolution. Further, Wavelet denoising has been used to...
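
    For reference, the non-blind Wiener step this work builds on can be sketched in a few lines; nsr is the usual scalar noise-to-signal regularizer, and the kernel is assumed registered to the origin (e.g. via ifftshift).

        import numpy as np

        def wiener_deconvolve(blurred, kernel, nsr=0.01):
            """2-D Wiener deconvolution with a scalar noise-to-signal ratio."""
            K = np.fft.fft2(kernel, blurred.shape)
            B = np.fft.fft2(blurred)
            W = np.conj(K) / (np.abs(K) ** 2 + nsr)   # Wiener filter in frequency domain
            return np.real(np.fft.ifft2(W * B))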

  9. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. The a priori information reflects the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence, so that deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables us not only to remove the waveform emitted by the transducer but also to estimate the phase, a parameter that is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  10. Analytical Approximation of the Deconvolution of Strongly Overlapping Broad Fluorescence Bands

    Science.gov (United States)

    Dubrovkin, J. M.; Tomin, V. I.; Ushakou, D. V.

    2016-09-01

    A method for deconvoluting strongly overlapping spectral bands into separate components that enables the uniqueness of the deconvolution procedure to be monitored was proposed. An asymmetric polynomial-modified function subjected to Fourier filtering (PMGFS) that allowed more accurate and physically reasonable band shapes to be obtained and also improved significantly the deconvolution convergence was used as the band model. The method was applied to the analysis of complexation in solutions of the molecular probe 4'-(diethylamino)-3-hydroxyflavone with added LiCl. Two-band fluorescence of the probe in such solutions was the result of proton transfer in an excited singlet state and overlapped strongly with stronger spontaneous emission of complexes with the ions. Physically correct deconvolutions of overlapping bands could not always be obtained using available software.

  11. Approximate deconvolution large eddy simulation of a barotropic ocean circulation model

    CERN Document Server

    San, Omer; Wang, Zhu; Iliescu, Traian

    2011-01-01

    This paper puts forth a new large eddy simulation closure modeling strategy for two-dimensional turbulent geophysical flows. This closure modeling approach utilizes approximate deconvolution, which is based solely on mathematical approximations and does not employ phenomenological arguments, such as the concept of an energy cascade. The new approximate deconvolution model is tested in the numerical simulation of the wind-driven circulation in a shallow ocean basin, a standard prototype of more realistic ocean dynamics. The model employs the barotropic vorticity equation driven by a symmetric double-gyre wind forcing, which yields a four-gyre circulation in the time mean. The approximate deconvolution model yields the correct four-gyre circulation structure predicted by a direct numerical simulation, on a much coarser mesh but at a fraction of the computational cost. This first step in the numerical assessment of the new model shows that approximate deconvolution could represent a viable alternative to standar...

  12. Improving spatial resolution in fiber Raman distributed temperature sensor by using deconvolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Lei Zhang; Xue Feng; Wei Zhang; Xiaoming Liu

    2009-01-01

    The deconvolution algorithm is adopted on the fiber Raman distributed temperature sensor (FRDTS) to improve the spatial resolution without reducing the pulse width of the light source. Numerical simulation shows that the spatial resolution is enhanced by four times using the frequency-domain deconvolution algorithm with high temperature accuracy. In experiment, a spatial resolution of 15 m is realized using a master oscillator power amplifier light source with 300-ns pulse width. In addition, the dispersion-induced limitation of the minimum spatial resolution achieved by deconvolution algorithm is analyzed. The results indicate that the deconvolution algorithm is a beneficial complement for the FRDTS to realize accurate locating and temperature monitoring for sharp temperature variations.

  13. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
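
    The simplification exploited here rests on the standard identity that the convolution of two Gaussians is again a Gaussian with summed variances,

        $$G_{\sigma_1}\ast G_{\sigma_2}=G_{\sqrt{\sigma_1^{2}+\sigma_2^{2}}},$$

    so blurring a GRBF-represented image by a Gaussian PSF only widens the basis functions attached to the control points, leaving the interpolation structure intact.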

  14. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

    Full Text Available Compressive sensing theory enables faithful reconstruction of signals, sparse in domain $\Psi$, at sampling rates lower than the Nyquist criterion, while using a sampling or sensing matrix $\Phi$ which satisfies the restricted isometry property. The roles played by the sensing matrix $\Phi$ and the sparsity matrix $\Psi$ are vital for faithful reconstruction. If the sensing matrix is dense, it takes a large amount of storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix which incurs the least computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with a few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of the reconstructed medical images using the various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ($\Phi\Psi$) is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
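
    A minimal sketch of the idea, assuming a 1-D circulant construction, a DCT sparsifying basis, and illustrative sizes (none taken from the paper): build the sensing matrix from a sparse seed row, then check how far the Gram matrix of the dictionary $\Phi\Psi$ is from the identity.

        import numpy as np
        from scipy.linalg import circulant
        from scipy.fft import dct

        rng = np.random.default_rng(0)
        n = 64
        seed = np.zeros(n)
        seed[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)  # sparse seed
        Phi = circulant(seed)[::4, :]               # sparse circulant rows, 16 x 64
        Psi = dct(np.eye(n), norm="ortho", axis=0)  # orthonormal DCT basis
        D = Phi @ Psi                               # dictionary matrix
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        off_diag = np.abs(D.T @ D - np.eye(n)).max()  # closeness of Gram to identity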

  15. Including frequency-dependent attenuation for the deconvolution of ultrasonic signals

    OpenAIRE

    Carcreff, Ewen; Bourguignon, Sébastien; Idier, Jérôme; Simon, Laurent; Duclos, Aroune

    2013-01-01

    Ultrasonic non-destructive testing (NDT) is a standard process for detecting flaws or discontinuities in industrial parts. A pulse is emitted by an ultrasonic transducer through a material, and a reflected wave is produced at each impedance change. In many cases, echoes can overlap in the received signal and deconvolution can be applied to perform echo separation and to enhance the resolution. Common deconvolution techniques assume that the shape of the echoes is invar...

  16. Optimum deconvolution algorithm for system with multiplicative white noise and additive correlative noise

    Institute of Scientific and Technical Information of China (English)

    王会立; 陈希信

    2004-01-01

    The optimum state filter, fixed-interval smoother, and optimum deconvolution algorithm for systems with multiplicative noise are derived under the condition that the dynamic noise is one-step autocorrelated and correlates with the measurement noise at the present step as well as one step in the past, while the multiplicative noise is white and statistically independent of the dynamic noise and the measurement noise. A simulation example demonstrates the effectiveness of the above-mentioned deconvolution algorithm.

  17. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    Science.gov (United States)

    2015-06-08

    AFRL-AFOSR-UK-TR-2015-0032, Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution, Sergey V... Grant Number FA8655-13-1-3034.

  18. Computational expression deconvolution in a complex mammalian organ

    Directory of Open Access Journals (Sweden)

    Master Stephen R

    2006-07-01

    Full Text Available Abstract Background Microarray expression profiling has been widely used to identify differentially expressed genes in complex cellular systems. However, while such methods can be used to directly infer intracellular regulation within homogeneous cell populations, interpretation of in vivo gene expression data derived from complex organs composed of multiple cell types is more problematic. Specifically, observed changes in gene expression may be due either to changes in gene regulation within a given cell type or to changes in the relative abundance of expressing cell types. Consequently, bona fide changes in intrinsic gene regulation may be either mimicked or masked by changes in the relative proportion of different cell types. To date, few analytical approaches have addressed this problem. Results We have chosen to apply a computational method for deconvoluting gene expression profiles derived from intact tissues by using reference expression data for purified populations of the constituent cell types of the mammary gland. These data were used to estimate changes in the relative proportions of different cell types during murine mammary gland development and Ras-induced mammary tumorigenesis. These computational estimates of changing compartment sizes were then used to enrich lists of differentially expressed genes for transcripts that change as a function of intrinsic intracellular regulation rather than shifts in the relative abundance of expressing cell types. Using this approach, we have demonstrated that adjusting mammary gene expression profiles for changes in three principal compartments – epithelium, white adipose tissue, and brown adipose tissue – is sufficient both to reduce false-positive changes in gene expression due solely to changes in compartment sizes and to reduce false-negative changes by unmasking genuine alterations in gene expression that were otherwise obscured by changes in compartment sizes. Conclusion By adjusting
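
    At its core, the compartment-size estimation in such expression deconvolution is a constrained mixture inversion; a minimal sketch (generic, not the authors' exact procedure) solves for non-negative proportions with the reference profiles as columns.

        import numpy as np
        from scipy.optimize import nnls

        def estimate_proportions(mixed_profile, reference_profiles):
            """mixed_profile: (genes,); reference_profiles: (genes, cell_types)."""
            p, _ = nnls(reference_profiles, mixed_profile)
            return p / p.sum()                     # normalize to compartment fractions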

  19. Wavenumber-frequency deconvolution of aeroacoustic microphone phased array data of arbitrary coherence

    Science.gov (United States)

    Bahr, Christopher J.; Cattafesta, Louis N.

    2016-11-01

    Deconvolution of aeroacoustic data acquired with microphone phased arrays is a computationally challenging task for distributed sources with arbitrary coherence. A new technique for performing such deconvolution is proposed. This technique relies on analysis of the array data in the wavenumber-frequency domain, allowing for fast convolution and reduced storage requirements when compared to traditional coherent deconvolution. A positive semidefinite constraint for the iterative deconvolution procedure is implemented and shows improved behavior in terms of quantifiable convergence metrics when compared to a standalone covariance inequality constraint. A series of simulations validates the method's ability to resolve coherence and phase angle relationships between partially coherent sources, as well as determines convergence criteria for deconvolution analysis. Simulations for point sources near the microphone phased array show potential for handling such data in the wavenumber-frequency domain. In particular, a physics-based integration boundary calculation is described, and can successfully isolate sources and track the appropriate integration bounds with and without the presence of flow. Magnitude and phase relationships between multiple sources are successfully extracted. Limitations of the deconvolution technique are determined from the simulations, particularly in the context of a simulated acoustic field in a closed test section wind tunnel with strong boundary layer contamination. A final application to a trailing edge noise experiment conducted in an open-jet wind tunnel matches best estimates of acoustic levels from traditional calculation methods and qualitatively assesses the coherence characteristics of the trailing edge noise source.

  20. PERT: a method for expression deconvolution of human blood samples from varied microenvironmental and developmental conditions.

    Directory of Open Access Journals (Sweden)

    Wenlian Qiao

    Full Text Available The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against those measured by well-established flow cytometry, even after batch correction was applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mono-nucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity.

  1. PERT: a method for expression deconvolution of human blood samples from varied microenvironmental and developmental conditions.

    Science.gov (United States)

    Qiao, Wenlian; Quon, Gerald; Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against those measured by well-established flow cytometry, even after batch correction was applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mono-nucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity.

  2. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
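
    One way to picture the stutter-removal step is as a small non-negative linear deconvolution: the observed lane signal is the true allele amounts convolved with a measured stutter pattern, which can then be inverted. This is a generic illustration of the idea, not the authors' algorithm; the stutter pattern and band indexing are assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def remove_stutter(observed, stutter_pattern):
            """observed[i]: band intensity at repeat length i; pattern[0] = true band."""
            n = len(observed)
            S = np.zeros((n, n))
            for j in range(n):                     # an allele at repeat length j ...
                for i, w in enumerate(stutter_pattern):
                    if j - i >= 0:                 # ... stutters to shorter lengths
                        S[j - i, j] = w
            alleles, _ = nnls(S, observed)         # non-negative allele amounts
            return alleles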

  3. Method of computerized glow curve deconvolution for analysing thermoluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, T [Division of General Education, Ashikaga Institute of Technology, Omae-cho 268-1, Ashikaga, Tochigi 326-8558 (Japan); Gartia, R K [Luminescence Dating Laboratory, Manipur University, Imphal 795003 (India)

    2003-11-07

    The conventional, widely accepted method of computerized glow curve deconvolution based on the general order kinetics formalism has two fatal defects in systems where two or more trapping levels have non-zero retrapping probability. The first is that it ignores the thermal connectivity between thermoluminescence (TL) peaks: under such conditions, electrons trapped at one trapping level, once activated, can be retrapped in another thermally connected level via the conduction band during the recording of the glow curve. The other is the impossibility of obtaining a global minimum when fitting the experimental TL to the theoretical one with existing techniques. This paper aims to provide answers to these defects. The first can be overcome by resorting to rigorous analysis using appropriate mathematical rate equations describing the flow of charge carriers. Though the second defect cannot be overcome completely, one can obtain a reasonable fit, which may not be unique. The algorithm is tested on synthetic as well as experimental glow curves.

  4. Multiframe Blind Super Resolution Imaging Based on Blind Deconvolution

    Institute of Scientific and Technical Information of China (English)

    元伟; 张立毅

    2016-01-01

    As an ill-posed problem, multiframe blind super resolution imaging recovers a high resolution image from a group of low resolution images with some degradations when information about the blur kernel is limited. Note that the quality of the recovered image is influenced more by the accuracy of the blur estimation than by an advanced regularization. We study the traditional model of multiframe super resolution and modify it for blind deblurring. Based on this analysis, we propose two algorithms. The first is based on the total variation blind deconvolution algorithm and is formulated as a functional for optimization with a regularization of the blur. Based on alternating minimization and the gradient descent algorithm, the high resolution image and the unknown blur kernel are estimated iteratively. By using the median shift and add (MSAA) operator, the second algorithm is more robust to outlier influence. The MSAA initialization simplifies the interpolation process to reconstruct the blurred high resolution image for blind deblurring and improves the accuracy of blind super resolution imaging. The experimental results demonstrate the superiority and accuracy of our novel algorithms.

  5. Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization

    Science.gov (United States)

    Moradi, Hamid; Tang, Shuo; Salcudean, Septimiu E.

    2016-03-01

    We define a deconvolution based photoacoustic reconstruction with sparsity regularization (DPARS) algorithm for image restoration from projections. The proposed method is capable of visualizing tissue in the presence of constraints such as the specific directivity of sensors and limited-view Photoacoustic Tomography (PAT). The directivity effect means that our algorithm treats the optically-generated ultrasonic waves based on the direction from which they arrive at the transducer. Most PA image reconstruction methods assume that sensors have an omni-directional response; however, in practice, the sensors show higher sensitivity to ultrasonic waves coming from one specific direction. In DPARS, the sensitivity of the transducer to incoming waves from different directions is considered. Thus, the DPARS algorithm takes into account the relative location of the absorbers with respect to the transducers, and generates a linear system of equations to solve for the distribution of absorbers. The numerical conditioning and computing times are improved by the use of a sparse discrete cosine transform (DCT) representation of the distribution of absorption coefficients. Our simulation results show that DPARS outperforms the conventional Delay-and-Sum (DAS) reconstruction method in terms of CNR and RMS errors. Experimental results confirm that DPARS provides images with higher resolution than DAS.
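
    The core computational pattern described here, solving a linear system for the absorber distribution while exploiting a sparse DCT representation, can be illustrated with a toy iterative shrinkage (ISTA) loop. The system matrix, sizes, and threshold below are synthetic stand-ins, not the paper's actual forward model.

```python
# Hedged sketch: solve A x = b while promoting sparsity of x in a DCT basis,
# via a plain ISTA loop. Because the orthonormal DCT is used, the L1 proximal
# step can be taken directly on the DCT coefficients.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n, m = 128, 96
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy system matrix
c_true = np.zeros(n); c_true[[3, 10, 40]] = [5.0, -3.0, 2.0]
x_true = idct(c_true, norm='ortho')               # signal sparse in DCT domain
b = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L gradient step size
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                         # gradient of 0.5*||Ax-b||^2
    c = dct(x - step * g, norm='ortho')           # move to DCT domain
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
    x = idct(c, norm='ortho')                     # back to signal domain
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # relative error
```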

  6. Blind image deconvolution using a robust GCD approach.

    Science.gov (United States)

    Pillai, S U; Liang, B

    1999-01-01

    In this correspondence, a new viewpoint is proposed for estimating an image from its distorted versions in the presence of noise, without a priori knowledge of the distortion functions. In the z-domain, the desired image can be regarded as the greatest common polynomial divisor among the distorted versions. With the assumption that the distortion filters are finite impulse response (FIR) and relatively coprime, in the absence of noise, this becomes a problem of taking the greatest common divisor (GCD) of two or more two-dimensional (2-D) polynomials. Exact GCD is not desirable because even extremely small variations due to quantization error or additive noise can destroy the integrity of the polynomial system and lead to a trivial solution. Our approach to this blind deconvolution approximation problem introduces a new robust interpolative 2-D GCD method based on a one-dimensional (1-D) Sylvester-type GCD algorithm. Experimental results with both synthetically blurred images and real motion-blurred pictures show that it is computationally efficient and moderately noise robust.

  7. Deconvolution of the energy loss function of the KATRIN experiment

    Science.gov (United States)

    Hannen, V.; Heese, I.; Weinheimer, C.; Sejersen Riis, A.; Valerius, K.

    2017-03-01

    The KATRIN experiment aims at a direct and model independent determination of the neutrino mass with 0.2 eV/c² sensitivity (at 90% C.L.) via a measurement of the endpoint region of the tritium beta-decay spectrum. The main components of the experiment are a windowless gaseous tritium source (WGTS), differential and cryogenic pumping sections and a tandem of a pre- and a main-spectrometer, applying the concept of magnetic adiabatic collimation with an electrostatic retardation potential to analyze the energy of beta decay electrons and to guide electrons passing the filter onto a segmented silicon PIN detector. One of the important systematic uncertainties of such an experiment is the energy loss of β-decay electrons due to elastic and inelastic scattering off tritium molecules within the source volume, which alters the shape of the measured spectrum. To correct for these effects an independent measurement of the corresponding energy loss function is required. In this work we describe a deconvolution method to extract the energy loss function from measurements of the response function of the experiment at different column densities of the WGTS using a monoenergetic electron source.
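
    A generic illustration of this kind of response-function deconvolution (not the KATRIN analysis itself): when the measured response is the convolution of a known kernel with an unknown function, a regularized Fourier division recovers the latter while limiting noise amplification. All signals and the regularization strength below are synthetic assumptions.

```python
# Regularized Fourier deconvolution: recover f from r = k * f + noise,
# given the kernel k. The eps term damps frequencies where |K| is small.
import numpy as np

n = 1024
x = np.arange(n)
f_true = np.exp(-0.5 * ((x - 300) / 15.0) ** 2)          # unknown function
k = np.exp(-0.5 * ((x - 50) / 8.0) ** 2); k /= k.sum()   # known kernel
r = np.fft.ifft(np.fft.fft(f_true) * np.fft.fft(k)).real # measured response
r += 1e-4 * np.random.default_rng(2).standard_normal(n)  # measurement noise

K = np.fft.fft(k)
eps = 1e-3                                               # regularization strength
F = np.fft.fft(r) * np.conj(K) / (np.abs(K) ** 2 + eps)  # damped inverse filter
f_hat = np.fft.ifft(F).real                              # deconvolved estimate
```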

  8. Quantitative polymerase chain reaction analysis by deconvolution of internal standard.

    Science.gov (United States)

    Hirakawa, Yasuko; Medh, Rheem D; Metzenberg, Stan

    2010-04-29

    Quantitative Polymerase Chain Reaction (qPCR) is a collection of methods for estimating the number of copies of a specific DNA template in a sample, but one that is not universally accepted because it can lead to highly inaccurate (albeit precise) results. The fundamental problem is that qPCR methods use mathematical models that explicitly or implicitly apply an estimate of amplification efficiency, the error of which is compounded in the analysis to unacceptable levels. We present a new method of qPCR analysis that is efficiency-independent and yields accurate and precise results in controlled experiments. The method depends on a computer-assisted deconvolution that finds the point of concordant amplification behavior between the "unknown" template and an admixed amplicon standard. We apply the method to demonstrate dexamethasone-induced changes in gene expression in lymphoblastic leukemia cell lines. This method of qPCR analysis does not use any explicit or implicit measure of efficiency, and may therefore be immune to problems inherent in other qPCR approaches. It yields an estimate of absolute initial copy number of template, and controlled tests show it generates accurate results.

  9. Quantitative polymerase chain reaction analysis by deconvolution of internal standard

    Directory of Open Access Journals (Sweden)

    Metzenberg Stan

    2010-04-01

    Full Text Available Abstract Background Quantitative Polymerase Chain Reaction (qPCR) is a collection of methods for estimating the number of copies of a specific DNA template in a sample, but one that is not universally accepted because it can lead to highly inaccurate (albeit precise) results. The fundamental problem is that qPCR methods use mathematical models that explicitly or implicitly apply an estimate of amplification efficiency, the error of which is compounded in the analysis to unacceptable levels. Results We present a new method of qPCR analysis that is efficiency-independent and yields accurate and precise results in controlled experiments. The method depends on a computer-assisted deconvolution that finds the point of concordant amplification behavior between the "unknown" template and an admixed amplicon standard. We apply the method to demonstrate dexamethasone-induced changes in gene expression in lymphoblastic leukemia cell lines. Conclusions This method of qPCR analysis does not use any explicit or implicit measure of efficiency, and may therefore be immune to problems inherent in other qPCR approaches. It yields an estimate of absolute initial copy number of template, and controlled tests show it generates accurate results.

  10. Sparse maximum harmonics-to-noise-ratio deconvolution for weak fault signature detection in bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Xu, Xiaoqiang

    2016-10-01

    De-noising and enhancement of the weak fault signature in a noisy signal are crucial for fault diagnosis, as features are often very weak and masked by background noise. Deconvolution methods have a significant advantage in counteracting the influence of the transmission path and enhancing the fault impulses. However, the performance of traditional deconvolution methods is greatly affected by some limitations, which restrict their application range. Therefore, this paper proposes a new deconvolution method, named sparse maximum harmonics-to-noise-ratio deconvolution (SMHD), that employs a novel index, the harmonics-to-noise ratio (HNR), as the objective function, iteratively choosing the optimum filter coefficients to maximize the HNR. SMHD is designed to enhance latent periodic impulse faults in heavily noisy signals by calculating the HNR to estimate the period. A sparse factor is utilized to further suppress the noise and improve the signal-to-noise ratio of the filtered signal in every iteration step. In addition, the updating process of the sparse threshold value and the period guarantees the robustness of SMHD. On this basis, the new method not only overcomes the limitations associated with the traditional deconvolution methods, minimum entropy deconvolution (MED) and maximum correlated kurtosis deconvolution (MCKD), but also yields results that stand up better to visual inspection, even if the fault period is not provided in advance. Moreover, the efficiency of the proposed method is verified on simulations and bearing data from different test rigs. The results show that the proposed method is effective in the detection of various bearing faults compared with the original MED and MCKD.

  11. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion-extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets from two different modalities (739 full-field digital mammography (FFDM) images and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect the image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  12. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  13. Optimizing performance of the deconvolution model reduction for large ODE systems

    CERN Document Server

    Barannyk, Lyudmyla L

    2013-01-01

    We investigate the numerical performance of the regularized deconvolution closure introduced recently by the authors. The purpose of the closure is to furnish constitutive equations for the Irving-Kirkwood-Noll procedure, a well-known method for deriving continuum balance equations from Newton's equations of particle dynamics. A version of this procedure used in the paper relies on spatial averaging developed by Hardy, and independently by Murdoch and Bedeaux. The constitutive equations for the stress are given as a sum of several operator terms acting on the mesoscale average density and velocity. Each term is a "convolution sandwich" containing the deconvolution operator, a composition or a product operator, and the convolution (averaging) operator. Deconvolution is constructed using filtered regularization methods from the theory of ill-posed problems. The purpose of regularization is to ensure numerical stability. The particular technique used for numerical experiments is truncated singular value decomposition.
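
    The truncated singular value decomposition mentioned at the end of this abstract is a standard way to stabilize a deconvolution: small singular values of the convolution operator are simply discarded so that noise is not blown up by the ill-conditioned inverse. A minimal sketch with a synthetic one-dimensional blur follows; the kernel, sizes, and truncation threshold are illustrative only.

```python
# Truncated SVD (TSVD) deconvolution of a 1-D blurred, noisy signal.
import numpy as np
from scipy.linalg import toeplitz

n = 200
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); kernel /= kernel.sum()
col = np.zeros(n); col[:len(kernel)] = kernel
A = toeplitz(col, np.zeros(n))            # lower-triangular convolution matrix

x_true = (np.abs(np.arange(n) - 100) < 20).astype(float)   # boxcar signal
b = A @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(n)

U, s, Vt = np.linalg.svd(A)
k = np.sum(s > 1e-2 * s[0])               # keep singular values above a threshold
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])   # regularized pseudo-inverse
```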

  14. Deconvolution methods based on φHL regularization for spectral recovery.

    Science.gov (United States)

    Zhu, Hu; Deng, Lizhen; Bai, Xiaodong; Li, Meng; Cheng, Zhao

    2015-05-10

    The recorded spectra often suffer from noise and band overlapping, and deconvolution methods are commonly used for spectral recovery. However, during the process of spectral recovery, details cannot always be preserved. To solve this problem, two regularization terms are proposed. First, the conditions a regularization term must satisfy to smooth noise while preserving detail are analyzed, and according to these conditions, φHL regularization is introduced into the spectral deconvolution model. In view of the deficiency of φHL under noisy conditions, adaptive φHL regularization (φAHL) is proposed. Then semi-blind deconvolution methods based on φHL regularization (SBD-HL) and on adaptive φHL regularization (SBD-AHL) are proposed, respectively. The simulation results indicate that the proposed SBD-HL and SBD-AHL methods achieve better recovery, and that SBD-AHL is superior to SBD-HL, especially in the noisy case.

  15. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low frequency components in seismic data usually leads full waveform inversion (FWI) into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long wavelength structure from the artificial low frequency components introduced by exponential damping.

  16. Data enhancement and analysis through mathematical deconvolution of signals from scientific measuring instruments

    Science.gov (United States)

    Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.

    1981-01-01

    Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.

  17. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    Energy Technology Data Exchange (ETDEWEB)

    Banyasz, Akos [Department of Physical Chemistry, Eoetvoes University, P.O. Box. 32, H-1518 Budapest 112 (Hungary); Dancs, Gabor [Department of Physical Chemistry, Eoetvoes University, P.O. Box. 32, H-1518 Budapest 112 (Hungary); Keszei, Erno [Department of Physical Chemistry, Eoetvoes University, P.O. Box. 32, H-1518 Budapest 112 (Hungary)]. E-mail: keszei@chem.elte.hu

    2005-11-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method to get fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown.
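
    The procedure described above, inverse filtering by Fourier division combined with a noise filter whose width must be chosen, can be sketched as follows. The instrument response, the kinetic trace, and the candidate cutoffs are all synthetic stand-ins; the loop mimics the graphical comparison of several filter settings.

```python
# Deconvolution by inverse filtering with an explicit Gaussian noise filter.
# Several candidate filter widths are scanned, echoing the graphical
# optimisation described in the abstract.
import numpy as np

t = np.linspace(0, 10, 512)
irf = np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2); irf /= irf.sum()   # response
kin = np.exp(-t) * (t > 0)                                      # true kinetics
meas = np.fft.ifft(np.fft.fft(kin) * np.fft.fft(irf)).real      # convolved
meas += 5e-4 * np.random.default_rng(4).standard_normal(t.size) # noise

freq = np.fft.fftfreq(t.size, d=t[1] - t[0])
for cutoff in (1.0, 1.5, 2.0):             # candidate noise-filter widths
    H = np.exp(-(freq / cutoff) ** 2)      # Gaussian low-pass noise filter
    est = np.fft.ifft(H * np.fft.fft(meas) / np.fft.fft(irf)).real
    print(cutoff, np.std(est[-50:]))       # crude residual-noise measure
```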

  18. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    A novel method of image processing is presented which relies on deconvolution of data using the response function of the apparatus. It is revealed that all the surface structures observed by digital imaging are generated by a convolution of the response function of the apparatus with the surfaces......’ nanomorphology, which provided images of convoluted physical structures rather than images of real physical structures. In order to restore the genuine physical information on surface structures, a deconvolution using a novel response function of the feedback circuitry is required. At the highest resolution......, that is, atomic resolution, the effect of deconvolution is at its maximum, whereas images at lower resolution are sharpened by eliminating smoothing effects and shadow effects. The method is applied to measurements of imaging by in situ scanning tunnelling microscopy (in situ STM) at atomic resolution...

  19. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    Science.gov (United States)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it meets the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited run of impulses. Ideally, the problem should target an impulse train as the output and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, whose optimal filter solution can be computed directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
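
    For context on the iterative methods this paper improves upon, the sketch below implements a bare-bones Wiggins-style MED iteration on a synthetic fault signal. It is a didactic illustration under assumed parameters, not MOMEDA's direct non-iterative solution.

```python
# Minimal MED iteration: update an FIR filter f so that the output y = X f
# becomes spikier (a kurtosis-maximizing fixed point, solved via R f = X^T y^3).
import numpy as np
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(5)
n, L = 2000, 30
impulses = np.zeros(n); impulses[::200] = 1.0          # periodic fault impulses
path = np.exp(-np.arange(60) / 10.0)                   # transmission path IR
x = np.convolve(impulses, path)[:n] + 0.05 * rng.standard_normal(n)

X = toeplitz(x, np.r_[x[0], np.zeros(L - 1)])          # X[i, j] = x[i - j]
R = X.T @ X                                            # input autocorrelation matrix
f = np.zeros(L); f[0] = 1.0                            # delta initialization
for _ in range(30):
    y = X @ f
    f = solve(R, X.T @ y ** 3)                         # MED fixed-point update
    f /= np.linalg.norm(f)
y = X @ f                                              # deconvolved (spiky) output
```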

  20. [Non-target screening of organic pollutants in sediments and sludges using gas chromatography-mass spectrometry and automated mass spectral deconvolution].

    Science.gov (United States)

    Wang, Gang; Ma, Huilian; Wang, Longxing; Chen, Jiping; Hou, Xiaohong

    2015-12-01

    A screening method combining ultrasonic extraction, gas chromatography-mass spectrometry detection and automated mass spectral deconvolution was developed for the non-target screening of non-polar and weakly polar pollutants in sediments and sludges. The samples were extracted by ultrasonication with dichloromethane three times, 20 min each. The extraction solutions were cleaned up by gel permeation chromatography and a silica gel column, and then 3 g of copper powder was used to remove sulfur by ultrasonication for 10 min. Parallel experiments were carried out five times and the RSDs ranged from 5.8% to 14.9%. The automated mass spectral deconvolution and identification system (AMDIS) improved the resolution of overlapping peaks and identified the pure mass spectra of the analytes even in cases of strong background interference and coverage by co-extracted substances. Standard spectral databases, such as the NISTDRUG, NISTEPA and NISTFDA mass spectral libraries, were used to qualitatively identify the organic pollutants in the samples. As a result, a total of 290 organic pollutants were identified, of which 190 and 153 pollutants were found in sediments and sludges, respectively. The identified pollutants included Environmental Protection Agency (EPA) priority pollutants, pharmaceuticals, herbicides, antioxidants, intermediates, organic solvents and chemical raw materials. The proposed method proved to be a promising one for the non-target screening of complex matrix samples, with the advantages of high sensitivity and good repeatability.

  1. A Note on the Asymptotic Normality of the Kernel Deconvolution Density Estimator with Logarithmic Chi-Square Noise

    Directory of Open Access Journals (Sweden)

    Yang Zu

    2015-07-01

    Full Text Available This paper studies the asymptotic normality of the kernel deconvolution estimator when the noise distribution is logarithmic chi-square; both independent and identically distributed observations and strong mixing observations are considered. The dependent case of the result is applied to obtain the pointwise asymptotic distribution of the deconvolution volatility density estimator in discrete-time stochastic volatility models.

  2. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    Science.gov (United States)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.

  3. High order statistics based blind deconvolution of bi-level images with unknown intensity values.

    Science.gov (United States)

    Kim, Jeongtae; Jang, Soohyun

    2010-06-07

    We propose a novel linear blind deconvolution method for bi-level images. The proposed method seeks an optimal point spread function and two parameters that maximize a high order statistics based objective function. Unlike existing minimum entropy deconvolution and least squares minimization methods, the proposed method requires neither the unrealistic assumption that the pixel values of a bi-level image are independently identically distributed samples of a random variable nor the tuning of regularization parameters. We demonstrate the effectiveness of the proposed method in simulations and experiments.

  4. Reduction of blooming artifacts in cardiac CT images by blind deconvolution and anisotropic diffusion filtering

    Science.gov (United States)

    Castillo-Amor, Angélica M.; Navarro-Navia, Cristian A.; Cadena-Bonfanti, Alberto J.; Contreras-Ortiz, Sonia H.

    2015-12-01

    Even though CT is an imaging technique that offers high quality images, limitations on its spatial resolution cause blurring in small objects with high contrast. This phenomenon is known as blooming artifact and affects cardiac images with small calcifications and stents. This paper describes an approach to reduce the blooming artifact and improve resolution in cardiac images using blind deconvolution and anisotropic diffusion filtering. Deconvolution increases resolution but reduces signal-to-noise ratio, and the anisotropic diffusion filter counteracts this effect without affecting the edges in the image.

  5. Improved spherical deconvolution to solve fiber crossing in diffusion-weighted MR Imaging.

    Science.gov (United States)

    Toselli, Benedetta; Franchin, Cristina; Scifo, Paola; Rizzo, Giovanna

    2015-08-01

    An improved spherical deconvolution algorithm to resolve fiber crossings in diffusion magnetic resonance imaging is presented here. The introduction of a regularization parameter in the reconstruction of the fiber directions allows the deconvolution to be treated as a constrained least squares problem and enforces the normalization of the reconstructed directions. Moreover, a new automatic stopping criterion is implemented which allows the algorithm to be pushed to convergence. These modifications significantly improve the performance of the algorithm, decreasing the resolution limit and reconstructing the fiber profiles more accurately.

  6. Deconvolution method in designing freeform lens array for structured light illumination.

    Science.gov (United States)

    Ma, Donglin; Feng, Zexin; Liang, Rongguang

    2015-02-10

    We have developed a deconvolution freeform lens array design approach to generate high-contrast structured light illumination patterns. This method constructs the freeform lens array according to the point response obtained by deconvolving the prescribed illumination pattern with the blur response of the extended light source. This design method is more effective than the conventional ray mapping approach in achieving accurate structured light patterns. For a sinusoidal fringe pattern, the contrast ratio can be as high as 97%, compared to 62% achieved by the conventional ray mapping method.

  7. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    Science.gov (United States)

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.

  8. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
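
    The block-circulant structure noted in this abstract is what makes the polar-voxel system matrix computationally attractive: multiplication by the full matrix reduces, after an FFT over the angular block index, to independent small matrix products per frequency. A toy demonstration with arbitrary block sizes (not the scanner geometry of the paper) follows.

```python
# Block-circulant matrix-vector product via FFT over the block (angular) index.
# The matrix has blocks A[i, j] = B[(i - j) mod p]; the product is a circular
# convolution over the block index, which the DFT diagonalizes blockwise.
import numpy as np

rng = np.random.default_rng(6)
p, m, n = 8, 5, 4                   # p angular blocks, each block is m x n
B = rng.standard_normal((p, m, n))  # first block-column: B[k] = block (k, 0)

# Dense reference construction
A = np.block([[B[(i - j) % p] for j in range(p)] for i in range(p)])
x = rng.standard_normal(p * n)
y_dense = A @ x

# FFT route: per-frequency m x n products instead of one (pm x pn) product
Bf = np.fft.fft(B, axis=0)                      # (p, m, n)
Xf = np.fft.fft(x.reshape(p, n), axis=0)        # (p, n)
Yf = np.einsum('kij,kj->ki', Bf, Xf)            # block product per frequency
y_fft = np.fft.ifft(Yf, axis=0).real.reshape(-1)

print(np.allclose(y_dense, y_fft))              # True
```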

  9. Application of deconvolution to hydrocarbons concentration measurement correction; Application de la deconvolution a la correction de mesure de concentration d'hydrocarbures

    Energy Technology Data Exchange (ETDEWEB)

    Sekko, E.; Boukrouche, A.; Neveux, Ph.; Thomas, G. [Lyon-1 Univ., LAGEP, UPRES-A CNRS Q 5007, 69 (France)

    1999-07-01

    In this paper, the problem of estimating the concentration of unburned hydrocarbons emitted by a fuel boiler is treated. In order to verify whether a boiler conforms to European standards, recorded data have been processed. This data processing has permitted the development of a deconvolution method combining optimal filtering and optimal control. The application of this technique to the available data has made it possible to verify the boiler's conformity to the new European standards. (authors)

  10. Combined Wavelet Transform with Curve-fitting for Objective Optimization of the Parameters in Fourier Self-deconvolution

    Institute of Scientific and Technical Information of China (English)

    张秀琦; 郑建斌; 高鸿

    2001-01-01

    Fourier self-deconvolution is among the most effective techniques for resolving overlapping bands: the deconvolution function narrows the bands, while apodization smooths the magnified noise. Yet the choice of the original half-width of each component and of the breaking point for truncation is often very subjective. In this paper, a method combining the wavelet transform with curve fitting is described, with the advantages of an enhanced signal-to-noise ratio as well as improved fitting conditions, and is applied to the objective optimization of the original half-widths of the components of unresolved bands for Fourier self-deconvolution. In addition, the noise can be separated from a noisy signal by the wavelet transform, so the breaking point of the apodization function can be determined directly in the frequency domain. Accordingly, artifacts in Fourier self-deconvolution are minimized significantly.
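
    A minimal sketch of the Fourier self-deconvolution step being optimized here, assuming Lorentzian bands of known half-width and a triangular apodization. The half-width gamma and the breaking point T are exactly the two subjective choices the paper's wavelet approach seeks to set objectively; both values below are arbitrary.

```python
# Fourier self-deconvolution: multiply the "interferogram" of the spectrum by
# the inverse of the Lorentzian decay exp(-2*pi*gamma*|t|), re-apodize, and
# transform back; each band is narrowed toward the apodization-limited width.
import numpy as np

nu = np.linspace(0, 100, 2048)
gamma = 1.5                                     # assumed Lorentzian half-width
spec = sum(a / (1 + ((nu - c) / gamma) ** 2)    # two overlapping bands
           for a, c in [(1.0, 48.0), (0.8, 52.0)])

ifg = np.fft.rfft(spec)                         # interferogram domain
t = np.arange(ifg.size) / (nu[-1] - nu[0])      # conjugate variable (approx.)
T = 0.6                                         # breaking point for truncation
apod = np.clip(1 - t / T, 0, None)              # triangular apodization
fsd = np.fft.irfft(ifg * np.exp(2 * np.pi * gamma * t) * apod, n=nu.size)
```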

  11. Isotope pattern deconvolution as rising tool for isotope tracer studies in environmental research

    Science.gov (United States)

    Irrgeher, Johanna; Zitek, Andreas; Prohaska, Thomas

    2014-05-01

    During the last decade, stable isotope tracers have emerged as a versatile tool in ecological research. Besides 'intrinsic' isotope tracers arising from the natural variation of isotopes, the intentional introduction of 'extrinsic' enriched stable isotope tracers into biological systems has gained significant interest. The induced change in the natural isotopic composition of an element allows, among other things, for studying the fate and fluxes of metals, trace elements and species in organisms, or provides an intrinsic marker or tag for particular biological samples. Due to the vast potential of this methodology, the number of publications dealing with applications of isotope (double) spikes as tracers to address research questions in 'real world systems' is constantly increasing. However, some isotope systems, like the natural Sr isotopic system, although potentially very powerful for this type of application, are still rarely used, mainly because their adequate measurement/determination poses major analytical challenges, as, e.g., Sr is available in significant amounts in natural samples. In addition, biological systems are subject to complex processes such as metabolism, adsorption/desorption or oxidation/reduction. As a consequence, classic evaluation approaches such as the isotope dilution mass spectrometry equation are often not applicable because of the unknown amount of tracer finally present in the sample. Isotope pattern deconvolution (IPD), based on multiple linear regression, serves as a simplified alternative data processing strategy to double spike isotope dilution calculations. The outstanding advantage of this mathematical tool lies in the possibility of deconvolving the isotope pattern in a spiked sample without knowing the quantity of enriched isotope tracer incorporated into the natural sample matrix, or the degree of impurities and species interconversion (e.g., from sample preparation). Here, the potential of IPD for environmental tracer studies is discussed.
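
    A hedged sketch of the multiple-linear-regression core of IPD: the measured isotope abundance pattern is modeled as a linear blend of the natural pattern and the enriched-spike pattern, and the molar fractions follow from a least squares fit. The four-isotope abundance numbers below are illustrative placeholders, not real Sr values.

```python
# Isotope pattern deconvolution by ordinary least squares.
import numpy as np

natural = np.array([0.0056, 0.0986, 0.0700, 0.8258])   # natural pattern (toy)
spike = np.array([0.0002, 0.0010, 0.9970, 0.0018])     # enriched tracer (toy)
x_true = np.array([0.7, 0.3])                           # hidden molar fractions

P = np.column_stack([natural, spike])                   # pattern matrix
measured = P @ x_true
measured += 0.001 * np.random.default_rng(7).standard_normal(4)  # noise

x_hat, res, rank, sv = np.linalg.lstsq(P, measured, rcond=None)
x_hat /= x_hat.sum()                                    # normalize fractions
print(np.round(x_hat, 3))                               # close to x_true
```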

  12. Matrix Factorization and Matrix Concentration

    OpenAIRE

    Mackey, Lester

    2012-01-01

    Motivated by the constrained factorization problems of sparse principal components analysis (PCA) for gene expression modeling, low-rank matrix completion for recommender systems, and robust matrix factorization for video surveillance, this dissertation explores the modeling, methodology, and theory of matrix factorization. We begin by exposing the theoretical and empirical shortcomings of standard deflation techniques for sparse PCA and developing alternative methodology more suitable for deflation.

  13. Singular-value decomposition analysis of source illumination in seismic interferometry by multidimensional deconvolution

    NARCIS (Netherlands)

    Minato, S.; Matsuoka, T.; Tsuji, T.

    2013-01-01

    We have developed a method to analytically evaluate the relationship between the source-receiver configuration and the retrieved wavefield in seismic interferometry performed by multidimensional deconvolution (MDD). The MDD method retrieves the wavefield with the desired source-receiver configuration.

  14. [Study on identification of Gastrodia elata Bl. by Fourier self-deconvolution infrared spectroscopy].

    Science.gov (United States)

    Cheng, Ze-Feng; Xu, Rui; Cheng, Cun-Gui

    2007-09-01

    In the present article, the FTIR spectra of wild and cultivated Gastrodia elata Bl. from different habitats and of its confusable varieties, such as Canna edulis Ker-Gawl, Colocasia esculenta (L.) Schott and Solanum tuberosum L., were obtained by horizontal attenuated total reflection infrared spectroscopy (HATR-FTIR) and were all transformed by Fourier self-deconvolution. The authors investigated the extent of the discrepancy between the Fourier self-deconvolved spectra of Gastrodia elata Bl. and its confusable varieties under various bandwidths and enhancement factors, and found that the discrepancy was most obvious when the bandwidth was between 75.0 and 76.0 and the enhancement was 3.2. The samples were then studied in detail by the Fourier self-deconvolution infrared spectroscopy (FSD-IR) analytical method. The results showed that the differences among them can be found by means of FSD-IR, although it is very difficult to distinguish the FSD-IR spectra of wild versus cultivated, or asexually versus sexually reproduced, Gastrodia elata Bl. The difference in FSD-IR spectra between Gastrodia elata Bl. and its confusable varieties is, however, very great. Therefore, this method can be used to recognize different Gastrodia elata Bl. samples and their confusable varieties simply, rapidly and accurately.

  15. Frequency-Difference Source Localization and Blind Deconvolution in Shallow Ocean Environments

    Science.gov (United States)

    2014-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. The goals of this work include (i) extending the blind deconvolution technique to dynamic multipath environments, and (ii) determining the utility of the frequency-difference concept within matched-field processing. If successful, the STR work might make underwater acoustic communications more efficient and reliable, since sound-channel calibration would not be required.

  16. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    Science.gov (United States)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the CASA multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than CASA MSMFS. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the MORESANE deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as a dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.

  17. Increasing the accuracy of radiotracer monitoring in one-dimensional flow using polynomial deconvolution correction.

    Science.gov (United States)

    Gholipour Peyvandi, Reza; Taheri, Ali

    2016-01-01

    Factors such as the type of fluid movement and gamma-ray scattering may decrease the precision of radiotracer monitoring of the response to a short tracer injection. Practical experiences using polynomial deconvolution techniques are presented. These techniques were successfully applied to correct the experimental results obtained and to increase the time resolution of the method.
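
    In the noise-free limit, this kind of correction reduces to polynomial long division of the recorded response by the system response, which is what scipy.signal.deconvolve computes. The sketch below is a minimal illustration of that operation with synthetic pulses, not necessarily the authors' exact correction scheme.

```python
# Direct (polynomial-division) deconvolution of a tracer response.
import numpy as np
from scipy.signal import deconvolve

injected = np.array([0.0, 1.0, 0.6, 0.2, 0.0])   # short tracer injection (toy)
system = np.array([1.0, 0.5, 0.25])              # flow/detector response (toy)
recorded = np.convolve(injected, system)         # what the detector sees

recovered, remainder = deconvolve(recorded, system)
print(np.allclose(recovered, injected))          # True in the noise-free case
```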

  18. Source signature processing in deep water, Gulf of Mexico: comparison between deterministic deconvolution and phase conjugation

    Directory of Open Access Journals (Sweden)

    C. R. Partouche

    2000-06-01

    Full Text Available The Center for Marine Resources and Environmental Technology has been developing a new method to improve the resolution of high-resolution seismic profiling. To achieve this, the source signature is recorded and the reflected data are sampled at a very high rate; in addition, a certain amount of post-processing is performed. During September 1999, a series of seismic profiles was acquired in the Gulf of Mexico using a 15 in³ watergun towed at the surface and a short single-channel hydrophone array towed about 250 m below the surface. The profiles were digitized at a rate of 80 000 samples per second; the length of each record was 4 s. Two different processes were applied to the data: deterministic deconvolution and phase conjugation. Both have the effect of compressing each reflected wavelet into a short pulse that is symmetrical about a central lobe. The compression ratio obtained by applying deterministic deconvolution to the source signature pulse was about 300; it was about 160 when applying phase conjugation. This produced a resolution of about 6 cm for the deconvolution process and about 10 cm for phase conjugation. The deconvolution process, however, is more subject to noise, so the better result in this experiment was provided by phase conjugation.

  19. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    Science.gov (United States)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    The image deconvolution problem is a challenging task in the field of image processing. Using image pairs can help provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and the gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework obtains superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.

  20. Improving the precision of fMRI BOLD signal deconvolution with implications for connectivity analysis.

    Science.gov (United States)

    Bush, Keith; Cisler, Josh; Bian, Jiang; Hazaroglu, Gokce; Hazaroglu, Onder; Kilts, Clint

    2015-12-01

    An important, open problem in neuroimaging analyses is developing analytical methods that ensure precise inferences about neural activity underlying fMRI BOLD signal despite the known presence of confounds. Here, we develop and test a new meta-algorithm for conducting semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that estimates, via bootstrapping, both the underlying neural events driving BOLD as well as the confidence of these estimates. Our approach includes two improvements over the current best performing deconvolution approach; 1) we optimize the parametric form of the deconvolution feature space; and, 2) we pre-classify neural event estimates into two subgroups, either known or unknown, based on the confidence of the estimates prior to conducting neural event classification. This knows-what-it-knows approach significantly improves neural event classification over the current best performing algorithm, as tested in a detailed computer simulation of highly-confounded fMRI BOLD signal. We then implemented a massively parallelized version of the bootstrapping-based deconvolution algorithm and executed it on a high-performance computer to conduct large scale (i.e., voxelwise) estimation of the neural events for a group of 17 human subjects. We show that by restricting the computation of inter-regional correlation to include only those neural events estimated with high-confidence the method appeared to have higher sensitivity for identifying the default mode network compared to a standard BOLD signal correlation analysis when compared across subjects.

  1. Least 1-Norm Pole-Zero Modeling with Sparse Deconvolution for Speech Analysis

    DEFF Research Database (Denmark)

    Shi, Liming; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    Moreover, to consider the spiky excitation form of the pulse train during voiced speech, the modeling parameters and sparse residuals are estimated in an iterative fashion using a least 1-norm pole-zero with sparse deconvolution algorithm. Compared with the conventional two-stage least squares pole-zero ...

  2. Spatial deconvolution of spectropolarimetric data: an application to quiet Sun magnetic elements

    Science.gov (United States)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suárez, D.; Ruiz Cobo, B.

    2015-07-01

    Context. One of the difficulties in extracting reliable information about the thermodynamical and magnetic properties of solar plasmas from spectropolarimetric observations is the presence of light dispersed inside the instruments, known as stray light. Aims: We aim to analyze quiet Sun observations after the spatial deconvolution of the data. We examine the validity of the deconvolution process with noisy data as we analyze the physical properties of quiet Sun magnetic elements. Methods: We used a regularization method that decouples the Stokes inversion from the deconvolution process, so that large maps can be quickly inverted without much additional computational burden. We applied the method on Hinode quiet Sun spectropolarimetric data. We examined the spatial and polarimetric properties of the deconvolved profiles, comparing them with the original data. After that, we inverted the Stokes profiles using the Stokes Inversion based on Response functions (SIR) code, which allows us to obtain the optical depth dependence of the atmospheric physical parameters. Results: The deconvolution process increases the contrast of continuum images and makes the magnetic structures sharper. The deconvolved Stokes I profiles reveal the presence of the Zeeman splitting while the Stokes V profiles significantly change their amplitude. The area and amplitude asymmetries of these profiles increase in absolute value after the deconvolution process. We inverted the original Stokes profiles from a magnetic element and found that the magnetic field intensity reproduces the overall behavior of theoretical magnetic flux tubes, that is, the magnetic field lines are vertical in the center of the structure and start to fan out as we move away from the center of the magnetic element. The magnetic field vector inferred from the deconvolved Stokes profiles also mimics a magnetic flux tube, but in this case we find stronger field strengths and larger gradients along the line of sight.

  3. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, C; Jin, M [University of Texas at Arlington, Arlington, TX (United States); Ouyang, L; Wang, J [UT Southwestern Medical Center at Dallas, Dallas, TX (United States)

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, (1) inverse filtering, (2) Wiener, and (3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure of the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with the increased width of the PSF and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) can achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better with wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation.
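
    Of the three methods compared, Richardson-Lucy is the easiest to write down; a one-dimensional sketch with a synthetic long-tail PSF follows, where 'observed' stands in for the blurred signal. All shapes, noise levels, and iteration counts are assumptions for illustration.

```python
# Minimal 1-D Richardson-Lucy deconvolution with a long-tail PSF.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(8)
n = 256
truth = np.zeros(n); truth[80:120] = 1.0          # scatter profile (toy)
psf = np.exp(-np.abs(np.arange(-40, 41)) / 8.0)   # long-tail PSF
psf /= psf.sum()
observed = fftconvolve(truth, psf, mode='same')
observed = np.clip(observed + 0.005 * rng.standard_normal(n), 1e-6, None)

est = np.full(n, observed.mean())                 # flat positive initialization
for _ in range(50):
    ratio = observed / np.clip(fftconvolve(est, psf, mode='same'), 1e-12, None)
    est *= fftconvolve(ratio, psf[::-1], mode='same')   # RL multiplicative update
```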

  4. Self-Constrained Euler Deconvolution Using Potential Field Data of Different Altitudes

    Science.gov (United States)

    Zhou, Wenna; Nan, Zeyu; Li, Jiyan

    2016-06-01

    Euler deconvolution has been developed into one of the most common tools for the semi-automatic interpretation of potential field data. The structural index (SI) is a main determining factor of the quality of depth estimation. In this paper, we first present an improved Euler deconvolution method that eliminates the influence of the SI by using potential field data at different altitudes. The different-altitude data can be obtained by upward continuation or directly from airborne measurements. Within a certain range, the Euler deconvolution equations at different altitudes have very similar forms; therefore, the ratio of the Euler equations at two different altitudes can be calculated to eliminate the SI. Thus, the depth and location of the geologic source can be calculated directly using the improved Euler deconvolution without any prior information. In particular, the influence of noise can be decreased by using upward continuation to different altitudes. The new method is called self-constrained Euler deconvolution (SED). Subsequently, based on the SED algorithm, we derive the full tensor gradient (FTG) form of the new improved method. Using the multi-component data of the FTG has added advantages in data interpretation. The FTG form is composed of the x-, y- and z-directional components; by using more components, it yields more accurate results and more detailed information. The proposed modified method is tested on different synthetic models, and satisfactory results are obtained. Finally, we applied the new approach to Bishop model magnetic data and real gravity data. All the results demonstrate that the new approach is a useful tool for interpreting potential field and full tensor gradient data.
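
    For reference, the conventional window-based Euler deconvolution that SED improves on solves the homogeneity equation (x-x0)dT/dx + (y-y0)dT/dy + (z-z0)dT/dz = N(B-T) as a linear least squares problem for the source position and background, given a trial structural index N. A minimal sketch follows; the monopole-like test field is a synthetic assumption used only to check the solver.

```python
# Conventional Euler deconvolution for one data window; SED removes the need
# to choose the structural index N that this version requires.
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, N):
    """Least squares Euler solution; returns (x0, y0, z0, B)."""
    # Rearranged: Tx*x0 + Ty*y0 + Tz*z0 + N*B = x*Tx + y*Ty + z*Tz + N*T
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Synthetic check: a monopole-like field T = 1/|r - r0| is homogeneous of
# degree -1 about the source, so N = 1 and the background B = 0.
rng = np.random.default_rng(9)
src = np.array([10.0, -5.0, 8.0])
x, y = rng.uniform(-50, 50, (2, 200))
z = np.zeros(200)
dx, dy, dz = x - src[0], y - src[1], z - src[2]
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = 1.0 / r
Tx, Ty, Tz = -dx / r**3, -dy / r**3, -dz / r**3
print(np.round(euler_window(x, y, z, T, Tx, Ty, Tz, N=1.0), 2))  # ~[10, -5, 8, 0]
```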

  5. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  6. Restoring Detailed Geomagnetic and Environmental Information from Continuous Sediment Paleomagnetic Measurement through Optimised Deconvolution

    Science.gov (United States)

    Xuan, C.; Oda, H.

    2013-12-01

    The development of pass-through cryogenic magnetometers has greatly improved our efficiency in collecting paleomagnetic and rock magnetic data from continuous samples such as sediment half-core sections and u-channels. During a pass-through measurement, the magnetometer sensor response inevitably convolves with the remanence of the continuous sample. The convolution process results in smoothed measurements and can seriously distort the paleomagnetic signal due to differences in sensor response along different measurement axes. Previous studies have demonstrated that deconvolution can effectively overcome the convolution effect of the sensor response and improve the resolution of continuous paleomagnetic data. However, the lack of an easy-to-use deconvolution tool and the difficulty in accurately measuring the magnetometer sensor response have greatly hindered the application of deconvolution. Here, we acquire a reliable estimate of the sensor response of a pass-through cryogenic magnetometer at Oregon State University by integrating repeated measurements of a magnetic point source. The point source is fixed in the center of a well-shaped polycarbonate cube with 5 mm edge length, and measured at every 1 mm position along a 40-cm interval while placing the polycarbonate cube at each of the 5 × 5 grid positions over a 2 × 2 cm² area of the cross section. The acquired sensor response reveals that cross terms (i.e., the response of the pick-up coil for one axis to magnetic signal along the other axes), which were often omitted in previous deconvolution practice, are clearly not negligible. Utilizing the detailed estimate of the magnetometer sensor response, we present UDECON, a graphical tool for the convenient application of optimised deconvolution based on Akaike's Bayesian Information Criterion (ABIC) minimization (Oda and Shibuya, 1996). UDECON directly reads a paleomagnetic measurement file, and allows the user to view, compare, and save data before and after deconvolution.

  7. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    Science.gov (United States)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition
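
    The Gold algorithm named above is a multiplicative iterative deconvolution, x <- x * (A^T y) / (A^T A x), whose updates keep the estimate non-negative, which suits waveforms built from positive echoes. A minimal Python sketch on a synthetic two-echo waveform follows (the Gaussian system response, echo positions and iteration count are our own toy assumptions, not the paper's data):

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic received waveform: two close echoes blurred by a Gaussian pulse
        pulse = np.exp(-0.5 * (np.arange(-25, 26) / 4.0) ** 2)   # system response
        truth = np.zeros(200)
        truth[[80, 95]] = [1.0, 0.6]                             # true echo positions
        y = np.convolve(truth, pulse, mode="same") + 0.002 * rng.standard_normal(200)

        # Gold iteration: x <- x * (A^T y) / (A^T A x), A = convolution by `pulse`
        A = lambda v: np.convolve(v, pulse, mode="same")
        AT = lambda v: np.convolve(v, pulse[::-1], mode="same")
        x = np.full(200, y.max() * 0.1)                          # positive initial guess
        num = AT(np.clip(y, 0.0, None))
        for _ in range(500):
            x *= num / np.maximum(AT(A(x)), 1e-12)

        xm = x / x.max()
        peaks = [i for i in range(1, 199)
                 if xm[i] > 0.3 and xm[i] >= xm[i - 1] and xm[i] >= xm[i + 1]]
        print("recovered echo positions:", peaks)                # expect ~[80, 95]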

  8. Comparison between the deconvolution and maximum slope 64-MDCT perfusion analysis of the esophageal cancer: Is conversion possible?

    Energy Technology Data Exchange (ETDEWEB)

    Djuric-Stefanovic, A., E-mail: avstefan@eunet.rs [Unit of Digestive Radiology (First Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Saranovic, Dj., E-mail: crvzve4@gmail.com [Unit of Digestive Radiology (First Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Masulovic, D., E-mail: draganmasulovic@yahoo.com [Unit of Digestive Radiology (First Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Ivanovic, A., E-mail: flydoc@eunet.rs [Unit of Digestive Radiology (First Surgical Clinic), Center of Radiology and MR, Clinical Center of Serbia, Belgrade (Serbia); Faculty of Medicine, University of Belgrade, Belgrade (Serbia); Pesko, P., E-mail: predragpesko@yahoo.com [Clinic of Digestive Surgery (First Surgical Clinic), Clinical Center of Serbia, Belgrade (Serbia); Faculty of Medicine, University of Belgrade, Belgrade (Serbia)

    2013-10-01

    Purpose: To assess whether CT perfusion parameter values of esophageal cancer obtained with deconvolution-based software and with the maximum slope algorithm are in agreement, or at least interchangeable. Methods: 278 esophageal tumor ROIs, derived from 35 CT perfusion studies that were performed with a 64-MDCT, were analyzed. “Slice-by-slice” and average “whole-covered-tumor-volume” analyses were performed. Tumor blood flow and blood volume were manually calculated from the arterial tumor-time–density graphs, according to the maximum slope methodology (BF{sub ms} and BV{sub ms}), and compared with the corresponding perfusion values, which were automatically computed by commercial deconvolution-based software (BF{sub deconvolution} and BV{sub deconvolution}), for the same tumor ROIs. Statistical analysis was performed using the Wilcoxon matched-pairs test, paired-samples t-test, Spearman and Pearson correlation coefficients, and Bland–Altman agreement plots. Results: BF{sub deconvolution} (median: 74.75 ml/min/100 g, range, 18.00–230.5) significantly exceeded BF{sub ms} (25.39 ml/min/100 g, range, 7.13–96.41) (Z = −14.390, p < 0.001), while BV{sub deconvolution} (median: 5.70 ml/100 g, range: 2.10–15.90) fell below BV{sub ms} (9.37 ml/100 g, range: 3.44–19.40) (Z = −13.868, p < 0.001). Both pairs of perfusion measurements significantly correlated with each other: BF{sub deconvolution} versus BF{sub ms} (r{sub S} = 0.585, p < 0.001), and BV{sub deconvolution} versus BV{sub ms} (r{sub S} = 0.602, p < 0.001). The geometric mean BF{sub deconvolution}/BF{sub ms} ratio was 2.8 (range, 1.1–6.8), while the geometric mean BV{sub deconvolution}/BV{sub ms} ratio was 0.6 (range, 0.3–1.1), within 95% limits of agreement. Conclusions: Significantly different CT perfusion values of the esophageal cancer blood flow and blood volume were obtained by deconvolution-based and maximum slope-based algorithms, although they correlated significantly with

  9. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
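
    For readers who want to see the mechanics, here is a minimal Python sketch of deconvoluting mixture data with the SVD-based pseudoinverse, in the spirit of the titration example (the three synthetic component spectra, noise level and sizes are our own stand-ins for the pH-indicator data):

        import numpy as np

        rng = np.random.default_rng(1)

        # Mixture data: 15 measured spectra, each a linear combination of 3 components
        wl = np.linspace(400, 700, 120)
        band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
        S = np.vstack([band(450, 20), band(530, 25), band(620, 18)])  # pure spectra, 3 x 120
        C_true = rng.random((15, 3))                                  # concentrations, 15 x 3
        D = C_true @ S + 0.01 * rng.standard_normal((15, 120))        # noisy measurements

        # Pseudoinverse via SVD: solve D ~ C @ S for C; truncating small singular
        # values stabilises the inversion against noise.
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        k = int(np.sum(s > 1e-3 * s[0]))                        # keep significant values
        S_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T   # 120 x 3 pseudoinverse
        C_est = D @ S_pinv
        print("max concentration error:", np.abs(C_est - C_true).max())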

  10. Deconvoluting nonaxial recoil in Coulomb explosion measurements of molecular axis alignment

    Science.gov (United States)

    Christensen, Lauge; Christiansen, Lars; Shepperson, Benjamin; Stapelfeldt, Henrik

    2016-08-01

    We report a quantitative study of the effect of nonaxial recoil during Coulomb explosion of laser-aligned molecules and introduce a method to remove the blurring caused by nonaxial recoil in the fragment-ion angular distributions. Simulations show that nonaxial recoil affects correlations between the emission directions of fragment ions differently from the effect caused by imperfect molecular alignment. The method, based on analysis of the correlation between the emission directions of the fragment ions from Coulomb explosion, is used to deconvolute the effect of nonaxial recoil from experimental fragment angular distributions. The deconvolution method is then applied to a number of experimental data sets to correct the degree of alignment for nonaxial recoil, to select optimal Coulomb explosion channels for probing molecular alignment, and to estimate the highest degree of alignment that can be observed from selected Coulomb explosion channels.

  11. Deconvoluting contributions of photoexcited species in polymer-quantum dot hybrid photovoltaic materials

    Science.gov (United States)

    Couderc, Elsa; Greaney, Matthew J.; Thornbury, William; Brutchey, Richard L.; Bradforth, Stephen E.

    2015-01-01

    Ultrafast transient absorption spectroscopy is used in conjunction with spectroelectrochemistry and chemical doping experiments to study the photogeneration of charges in hybrid bulk heterojunction (BHJ) thin films composed of poly[2,6-(4,4-bis(2-ethylhexyl)-4H-cyclopenta[2,1-b:3,4-b‧]-dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)] (PCPDTBT) and CdSe nanocrystals. Chemical doping experiments on hybrid and neat PCPDTBT:CdSe thin films are used to deconvolute the spectral signatures of the transient states in the near infrared. We confirm the formation and assignment of oxidized species in chemical doping experiments by comparing the spectral data to that from spectroelectrochemical measurements on hybrid and neat PCPDTBT:CdSe BHJ thin films. The deconvolution procedure allows extraction of the polaron populations in the neat polymer and hybrid thin films.

  12. Correcting direction-dependent gains in the deconvolution of radio interferometric images

    CERN Document Server

    Bhatnagar, S; Golap, K; Uson, Juan M

    2008-01-01

    Astronomical imaging using aperture synthesis telescopes requires deconvolution of the point spread function as well as calibration of instrumental and atmospheric effects. In general, such effects are time-variable and vary across the field of view as well, resulting in direction-dependent (DD), time-varying gains. Most existing imaging and calibration algorithms assume that the corruptions are direction independent, preventing even moderate dynamic range full-beam, full-Stokes imaging. We present a general framework for imaging algorithms which incorporate DD errors. We describe as well an iterative deconvolution algorithm that corrects known DD errors due to the antenna power patterns and pointing errors for high dynamic range full-beam polarimetric imaging. Using simulations we demonstrate that errors due to realistic primary beams as well as antenna pointing errors will limit the dynamic range of upcoming higher sensitivity instruments and that our new algorithm can be used to correct for such errors. We...

  13. Sparse blind deconvolution of seismic data via spectral projected-gradient

    CERN Document Server

    Liu, Entao; McClellan, James H; Al-Shuhail, Abdullatif A

    2016-01-01

    We present an efficient numerical scheme for seismic blind deconvolution in a multichannel scenario. The method is iterative with two steps: wavelet estimation across all channels and refinement of the reflectivity estimate simultaneously in all channels using sparse deconvolution. The reflectivity update step is formulated as a basis pursuit denoising problem that is solved with the spectral projected-gradient algorithm which is known to be the fastest computational method for obtaining the sparse solution of this problem. Wavelet re-estimation has a closed form solution when performed in the frequency domain by finding the minimum energy wavelet common to all channels. In tests with both synthetic and real data, this new method yields better quality results with significantly less computational effort (more than two orders of magnitude faster) when compared to existing methods.
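
    The reflectivity step described above is a basis pursuit denoising problem, which the authors solve with the spectral projected-gradient method. As a simplified stand-in, the sketch below minimises the same l1-regularised deconvolution objective with plain ISTA on one synthetic channel (the Ricker wavelet, spike positions and parameters are our own illustrative choices; ISTA is slower than SPG and is used here only for brevity):

        import numpy as np

        rng = np.random.default_rng(2)

        # Sparse reflectivity blurred by a 25 Hz Ricker wavelet, plus noise
        n = 300
        r_true = np.zeros(n)
        r_true[[60, 140, 155, 230]] = [1.0, -0.7, 0.5, 0.8]
        tw = np.arange(-30, 31) * 0.004
        wav = (1 - 2 * (np.pi * 25 * tw) ** 2) * np.exp(-(np.pi * 25 * tw) ** 2)
        y = np.convolve(r_true, wav, mode="same") + 0.02 * rng.standard_normal(n)

        # ISTA for min_r 0.5 * ||W r - y||^2 + lam * ||r||_1, W = convolution by wav
        W = lambda v: np.convolve(v, wav, mode="same")
        WT = lambda v: np.convolve(v, wav[::-1], mode="same")
        L = np.max(np.abs(np.fft.fft(wav, n))) ** 2        # Lipschitz bound of W^T W
        lam, step = 0.05, 1.0 / L
        r = np.zeros(n)
        for _ in range(400):
            z = r - step * WT(W(r) - y)
            r = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        print("recovered support:", np.nonzero(np.abs(r) > 0.1)[0])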

  14. Stray-light contamination and spatial deconvolution of slit-spectrograph observations

    CERN Document Server

    Beck, C; Fabbian, D

    2011-01-01

    Stray light caused by scattering on optical surfaces and in the Earth's atmosphere degrades the spatial resolution of observations. We study the contribution of stray light to the two channels of POLIS. We test the performance of different methods of stray-light correction and spatial deconvolution to improve the spatial resolution post-facto. We model the stray light as having two components: a spectrally dispersed component and a component of parasitic light caused by scattering inside the spectrograph. We use several measurements to estimate the two contributions: observations with a (partly) blocked FOV, a convolution of the FTS spectral atlas, imaging in the pupil plane, umbral profiles, and spurious polarization signal in telluric lines. The measurements allow us to estimate the spatial PSF of POLIS and the main spectrograph of the German VTT. We use the PSF for a deconvolution of both spectropolarimetric data and investigate the effect on the spectra. The parasitic contribution can be directly and accu...

  15. Holographic time-resolved particle tracking by means of three-dimensional volumetric deconvolution

    CERN Document Server

    Latychevskaia, Tatiana

    2014-01-01

    Holographic particle image velocimetry allows tracking particle trajectories in time and space by means of holography. However, the drawback of the technique is that in the three-dimensional particle distribution reconstructed from a hologram, the individual particles can hardly be resolved due to the superimposed out-of-focus signal from neighboring particles. We demonstrate here a three-dimensional volumetric deconvolution applied to the reconstructed wavefront which results in resolving all particles simultaneously in three-dimensions. Moreover, we apply the three-dimensional volumetric deconvolution to reconstructions of a time-dependent sequence of holograms of an ensemble of polystyrene spheres moving in water. From each hologram we simultaneously resolve all particles in the ensemble in three dimensions and from the sequence of holograms we obtain the time-resolved trajectories of individual polystyrene spheres.

  16. Image Deconvolution Under Poisson Noise Using Sparse Representations and Proximal Thresholding Iteration

    CERN Document Server

    Dupé, François-Xavier; Starck, Jean Luc

    2008-01-01

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transform. Our key innovations are: First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a non-smooth sparsity-promoting penalty on the image representation coefficients (e.g., the l1-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of...
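
    The first of those innovations, the Anscombe transform A(y) = 2*sqrt(y + 3/8), converts Poisson counts into data with approximately unit Gaussian variance, which is precisely what lets Gaussian-noise deconvolution machinery be reused. A small numerical check of that property (our own illustration, not code from the paper):

        import numpy as np

        rng = np.random.default_rng(3)

        # Var[A(y)] -> 1 for Poisson(lam) data once lam is moderately large,
        # while the raw variance grows with the mean.
        anscombe = lambda y: 2.0 * np.sqrt(y + 3.0 / 8.0)
        for lam in (2, 5, 20, 100):
            y = rng.poisson(lam, size=200_000)
            print(f"lambda={lam:4d}  raw var={y.var():7.2f}  "
                  f"stabilised var={anscombe(y).var():5.3f}")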

  17. Probabilistic blind deconvolution of non-stationary sources

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    We solve a class of blind signal separation problems using a constrained linear Gaussian model. The observed signal is modelled by a convolutive mixture of colored noise signals with additive white noise. We derive a time-domain EM algorithm, `KaBSS', which estimates the source signals, the associated second-order statistics, the mixing filters and the observation noise covariance matrix. KaBSS invokes the Kalman smoother in the E-step to infer the posterior probability of the sources, and one-step lower bound optimization of the mixing filters and noise covariance in the M-step. In line with (Parra and Spence, 2000), the source signals are assumed time variant in order to constrain the solution sufficiently. Experimental results are shown for mixtures of speech signals.

  18. Matrix Thermalization

    CERN Document Server

    Craps, Ben; Nguyen, Kévin

    2016-01-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  19. Activation foils unfolding for neutron spectrometry: Comparison of different deconvolution methods

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, S.P. [Radiation Safety Systems Division, BARC, Mumbai 400085 (India)], E-mail: sam.tripathy@gmail.com; Sunil, C. [Radiation Safety Systems Division, BARC, Mumbai 400085 (India); Nandy, M. [Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700064 (India); Sarkar, P.K. [Radiation Safety Systems Division, BARC, Mumbai 400085 (India); Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700064 (India); Sharma, D.N. [Radiation Safety Systems Division, BARC, Mumbai 400085 (India); Mukherjee, B. [Deutsches Elektronen-Synchrotron, LLRF Group, D-22607 Hamburg (Germany)

    2007-12-21

    The results obtained from the activation foil measurements are unfolded using two different deconvolution methods, BUNKI and a genetic algorithm (GA). The spectra produced by these codes agree fairly well with each other and are comparable with the spectrum measured previously for the same system using an NE213 liquid scintillator, obtained by unfolding the neutron-induced proton pulse-height distribution with two different methods, viz. FERDOR and BUNKI. The details of the various unfolding procedures used in this work are reported in this paper.

  20. Doppler broadening effect on collision cross section functions - Deconvolution of the thermal averaging

    Science.gov (United States)

    Bernstein, R. B.

    1973-01-01

    The surprising feature of the Doppler problem in threshold determination is the 'amplification effect' of the target's thermal energy spread. The small thermal energy spread of the target molecules results in a large dispersion in relative kinetic energy. The Doppler broadening effect in connection with thermal energy beam experiments is discussed, and a procedure is recommended for the deconvolution of molecular scattering cross-section functions whose dominant dependence upon relative velocity is approximately that of the standard low-energy form.

  1. A dynamic subgrid-scale modeling framework for large eddy simulation using approximate deconvolution

    CERN Document Server

    Maulik, Romit

    2016-01-01

    We put forth a dynamic modeling framework for sub-grid parametrization of large eddy simulation of turbulent flows, based upon the use of the approximate deconvolution procedure to compute the Smagorinsky constant self-adaptively from the resolved flow quantities. Our numerical assessments on the Burgers turbulence problem show that the proposed approach can serve as a viable tool for addressing the turbulence closure problem, owing to its flexibility.
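
    Approximate deconvolution here means applying a truncated van Cittert series, Q_N = sum_{k=0..N-1} (I - G)^k, to the filtered field in order to approximate the unfiltered one. A one-dimensional Python illustration with a Gaussian filter (our own toy setup, not the paper's solver):

        import numpy as np

        # Approximate deconvolution of a low-pass-filtered periodic 1D field
        n = 256
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        u = np.sin(3 * x) + 0.4 * np.sin(11 * x)          # "true" field

        k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])  # angular wavenumbers
        Ghat = np.exp(-(0.05 * k) ** 2)                   # Gaussian filter transfer fn
        G = lambda v: np.real(np.fft.ifft(Ghat * np.fft.fft(v)))

        u_bar = G(u)                                      # resolved (filtered) field
        u_star = np.zeros_like(u_bar)
        term = u_bar.copy()
        for _ in range(5):                                # N = 5 deconvolution levels
            u_star += term
            term -= G(term)                               # apply (I - G) repeatedly

        print("filtered error:   ", np.abs(u_bar - u).max())
        print("deconvolved error:", np.abs(u_star - u).max())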

  2. Deconvolution Method for Determination of the Nitrogen Content in Cellulose Carbamates

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Cellulose carbamates (CC) were synthesized from microcrystalline cellulose as the raw material. The Fourier transform infrared spectra of CC with different nitrogen contents were recorded. Accurate values of the nitrogen content of CC can be obtained using the deconvolution method when the nitrogen content is less than 3.5%. The relationship between the nitrogen content and the absorption intensity ratio of the corresponding separated absorption peaks in the FTIR spectra has been expressed precisely by an equation.

  3. The Moon: Determining Minerals and their Abundances with Mid-IR Spectral Deconvolution II

    Science.gov (United States)

    Kozlowski, Richard W.; Donaldson Hanna, K.; Sprague, A. L.; Grosse, F. A.; Boop, T. S.; Warell, J.; Boccafola, K.

    2007-10-01

    We determine the mineral compositions and abundances at three locations on the lunar surface using an established spectral deconvolution algorithm (Ramsey 1996, Ph.D. Dissertation, ASU; Ramsey and Christiansen 1998, JGR 103, 577-596) for mid-infrared spectral libraries of mineral separates of varying grain sizes. Spectral measurements of the lunar surface were obtained at the Infrared Telescope Facility (IRTF) on Mauna Kea, HI with Boston University's Mid-Infrared Spectrometer and Imager (MIRSI). Our chosen locations, Aristarchus, Grimaldi and Mersenius C, have been previously observed in the VIS near-IR from ground-based telescopes and spacecraft (Zisk et al. 1977, The Moon 17, 59-99; Hawke et al. 1993, GRL 20, 419-422; McEwen et al. 1994, Science 266, 1858-1862; Peterson et al. 1995, 22, 3055-3058; Warell et al. 2006, Icarus 180, 281-291), however there are no sample returns for analysis. Surface mineral deconvolutions of the Grimaldi Basin infill are suggestive of anorthosite, labradorite, orthopyroxene, olivine, garnet and phosphate. Peterson et al. (1995) indicated the infill of Grimaldi Basin has a noritic anorthosite or anorthositic norite composition. Our spectral deconvolution supports these results. Modeling of other lunar locations is underway. We have also successfully modeled laboratory spectra of HED meteorites, Vesta, and Mercury (see meteorites and mercurian abstracts this meeting). These results demonstrate the spectral deconvolution method to be robust for making mineral identifications on remotely observed objects, in particular main-belt asteroids, the Moon, and Mercury. This work was funded by NSF AST406796.
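
    The underlying numerical step in this kind of spectral deconvolution is a linear, area-weighted unmixing of the measured spectrum against a library of end-member spectra, solved under a non-negativity constraint. A schematic Python sketch (the random "library" and abundances are placeholders of our own, not lunar data or the Ramsey code):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)

        # Synthetic library: 4 end-member spectra over 80 bands (placeholder values)
        lib = rng.uniform(0.2, 1.0, size=(80, 4))
        abund_true = np.array([0.55, 0.30, 0.15, 0.0])     # areal abundances, sum to 1
        obs = lib @ abund_true + 0.005 * rng.standard_normal(80)

        # Non-negative least squares, then renormalise so abundances sum to one
        abund, _ = nnls(lib, obs)
        abund /= abund.sum()
        print("estimated abundances:", np.round(abund, 3))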

  4. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions.
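
    In outline, the method divides the Fourier transform of the measured spectrum by that of the broadening lineshape and apodises the quotient so that noise amplification stays bounded. A minimal Python illustration on two overlapped synthetic Lorentzian lines (linewidths, apodisation width and grid are our own assumptions):

        import numpy as np

        # Two overlapped Lorentzian lines broadened beyond their separation
        n = 1024
        field = np.linspace(-50, 50, n)
        lor = lambda x0, w: (w / np.pi) / ((field - x0) ** 2 + w ** 2)
        spectrum = lor(-4.0, 2.5) + 0.8 * lor(1.0, 2.5)

        kernel = lor(0.0, 1.5)                       # broadening to deconvolve away
        kernel /= kernel.sum()
        S = np.fft.fft(spectrum)
        K = np.fft.fft(np.fft.ifftshift(kernel))

        taper = np.exp(-(np.fft.fftfreq(n) / 0.08) ** 2)   # Gaussian apodisation
        enhanced = np.real(np.fft.ifft(S / K * taper))

        pk = [i for i in range(1, n - 1)
              if enhanced[i] > 0.5 * enhanced.max()
              and enhanced[i] >= enhanced[i - 1] and enhanced[i] >= enhanced[i + 1]]
        print("resolved line positions:", np.round(field[pk], 1))   # ~[-4.0, 1.0]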

  5. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    Full Text Available In transmitted optical microscopy, absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution on the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.

  6. Analysis and deconvolution of dimethylnaphthalene isomers using gas chromatography vacuum ultraviolet spectroscopy and theoretical computations.

    Science.gov (United States)

    Schenk, Jamie; Mao, James X; Smuts, Jonathan; Walsh, Phillip; Kroll, Peter; Schug, Kevin A

    2016-11-16

    An issue with most gas chromatographic detectors is their inability to deconvolve coeluting isomers. Dimethylnaphthalenes are a class of compounds that can be particularly difficult to speciate by gas chromatography - mass spectrometry analysis, because of their significant coelution and similar mass spectra. As an alternative, a vacuum ultraviolet spectroscopic detector paired with gas chromatography was used to study the systematic deconvolution of mixtures of coeluting isomers of dimethylnaphthalenes. Various ratio combinations of 75:25; 50:50; 25:75; 20:80; 10:90; 5:95; and 1:99 were prepared to test the accuracy, precision, and sensitivity of the detector for distinguishing overlapping isomers that had distinct, but very similar absorption spectra. It was found that, under reasonable injection conditions, all of the pairwise overlapping isomers tested could be deconvoluted up to nearly two orders of magnitude (up to 99:1) in relative abundance. These experimental deconvolution values were in agreement with theoretical covariance calculations performed for two of the dimethylnaphthalene isomers. Covariance calculations estimated high picogram detection limits for a minor isomer coeluting with low to mid-nanogram quantity of a more abundant isomer. Further characterization of the analytes was performed using density functional theory computations to compare theory with experimental measurements. Additionally, gas chromatography - vacuum ultraviolet spectroscopy was shown to be able to speciate dimethylnaphthalenes in jet and diesel fuel samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    Directory of Open Access Journals (Sweden)

    Yuebo Zha

    2015-03-01

    Full Text Available Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm.

  8. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.

  9. Perfusion deconvolution in DSC-MRI with dispersion-compliant bases.

    Science.gov (United States)

    Pizzolato, Marco; Boutelier, Timothé; Deriche, Rachid

    2017-02-01

    Perfusion imaging of the brain via Dynamic Susceptibility Contrast MRI (DSC-MRI) allows tissue perfusion characterization by recovering the tissue impulse response function and scalar parameters such as the cerebral blood flow (CBF), blood volume (CBV), and mean transit time (MTT). However, the presence of bolus dispersion causes the data to reflect macrovascular properties, in addition to tissue perfusion. In this case, when performing deconvolution of the measured arterial and tissue concentration time-curves it is only possible to recover the effective, i.e. dispersed, response function and parameters. We introduce Dispersion-Compliant Bases (DCB) to represent the response function in the presence and absence of dispersion. We perform in silico and in vivo experiments, and show that DCB deconvolution outperforms oSVD and the state-of-the-art CPI+VTF techniques in the estimation of effective perfusion parameters, regardless of the presence and amount of dispersion. We also show that DCB deconvolution can be used as a pre-processing step to improve the estimation of dispersion-free parameters computed with CPI+VTF, which employs a model of the vascular transport function to characterize dispersion. Indeed, in silico results show a reduction of relative errors up to 50% for dispersion-free CBF and MTT. Moreover, the DCB method recovers effective response functions that comply with healthy and pathological scenarios, and offers the advantage of making no assumptions about the presence, amount, and nature of dispersion. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. An overview of computer algorithms for deconvolution-based assessment of in vivo neuroendocrine secretory events.

    Science.gov (United States)

    Veldhuis, J D; Johnson, M L

    1990-06-01

    The availability of increasingly efficient computational systems has made feasible the otherwise burdensome analysis of complex neurobiological data, such as in vivo neuroendocrine glandular secretory activity. Neuroendocrine data sets are typically sparse, noisy and generated by combined processes (such as secretion and metabolic clearance) operating simultaneously over both short and long time spans. The concept of a convolution integral to describe the impact of two or more processes acting jointly has offered an informative mathematical construct with which to dissect (deconvolve) specific quantitative features of in vivo neuroendocrine phenomena. Appropriate computer-based deconvolution algorithms are capable of solving families of 100-300 simultaneous integral equations for a large number of secretion and/or clearance parameters of interest. For example, one application of computer technology allows investigators to deconvolve the number, amplitude and duration of statistically significant underlying secretory episodes of algebraically specifiable waveform and simultaneously estimate subject- and condition-specific neurohormone metabolic clearance rates using all observed data and their experimental variances considered simultaneously. Here, we will provide a definition of selected deconvolution techniques, review their conceptual basis, illustrate their applicability to biological data and discuss new perspectives in the arena of computer-based deconvolution methodologies for evaluating complex biological events.

  11. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  12. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
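
    For Gaussian residuals the criterion can be written, up to model-independent constants, as BIC = n*ln(RSS/n) + p*ln(n), with p the number of fitted parameters; the model with the smallest BIC is retained. A minimal re-implementation of the idea (our own sketch, not the decon1d code) that selects the number of Lorentzian peaks in a noisy synthetic spectrum:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)

        x = np.linspace(-10, 10, 400)
        lor = lambda x, a, x0, w: a * w ** 2 / ((x - x0) ** 2 + w ** 2)
        y = lor(x, 1.0, -1.5, 1.0) + lor(x, 0.7, 2.0, 1.2) \
            + 0.02 * rng.standard_normal(x.size)

        def model(x, *p):                    # sum of k Lorentzians, 3 params each
            return sum(lor(x, *p[i:i + 3]) for i in range(0, len(p), 3))

        n = x.size
        for k in (1, 2, 3):
            p0 = [1.0, 0.0, 1.0] * k
            p0[1::3] = np.linspace(-3, 3, k) # spread the initial centres
            popt, _ = curve_fit(model, x, y, p0=p0, maxfev=20000)
            rss = float(np.sum((y - model(x, *popt)) ** 2))
            bic = n * np.log(rss / n) + 3 * k * np.log(n)
            print(f"k={k}: BIC = {bic:8.1f}")  # the minimum should occur at k = 2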

  13. Bayesian deconvolution for angular super-resolution in forward-looking scanning radar.

    Science.gov (United States)

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-03-23

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson-Lucy algorithm.

  14. Ptychographic inversion via Wigner distribution deconvolution: Noise suppression and probe design

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peng, E-mail: elp12pl@sheffield.ac.uk; Edo, Tega B.; Rodenburg, John M.

    2014-12-15

    We reconsider the closed form solution of the ptychographic phase problem called the Wigner Distribution Deconvolution Method (WDDM), which has remained discarded for twenty years. Ptychographic reconstruction is nowadays always undertaken by iterative algorithms. WDDM gives rise to a 4 dimensional data cube of all the relative phases between points in the diffraction plane. Here we demonstrate a novel method to use all this information, instead of just the small subset used in the original ‘stepping out’ procedure developed in the 1990s, thus greatly suppressing noise. We further develop a method for designing an improved probe (illumination function) to further decrease noise effects during the deconvolution division. Combining these two with an iterative procedure for the deconvolution, which avoids the usual difficulty of a divide by a small number, we show in model calculations that WDDM competes well with the modern conventional iterative methods like ePIE (the extended Ptychographical Iterative Engine).
    Highlights:
    • We rehearse the derivation of WDDM and put forward its implementation conditions.
    • We propose a projection strategy to exploit all the phase information.
    • We define the optimised probe for WDDM and report a method to design the probe.
    • We put forward an iterative noise suppression method to enhance the performance.
    • All these improvements have been successfully demonstrated via simulated results.

  15. Experimental investigation of the perfusion of the liver with non-diffusible tracers: Differentiation of the arterial and portal-venous components by deconvolution analysis of first-pass time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, Z.; Torsello, G.; Reifenrath, C.; Porschen, R.; Vosberg, H.

    1988-10-01

    The transfer function of the liver perfusion is an idealised time-activity curve that could be registered over the liver if a non-diffusible tracer were injected directly into the abdominal aorta and no tracer recirculation occurred. The reproducibility of the transfer function was experimentally investigated in foxhounds. Both the routes of tracer application and the modes of data evaluation were varied, and the perfusion was investigated under physiological and pathological conditions. The transfer function was calculated by deconvolution analysis of first-pass time-activity curves using the matrix regularisation method. The transfer function showed clearly distinguishable arterial and portal-venous components. Repeated peripheral venous and central aortic applications resulted in reproducible curves. In addition to the arterial and portal-venous components, the subcomponents of the portal-venous component could also be identified in the transfer function after ligation of the appropriate vessels. The accuracy of the mathematical procedure was tested by computer simulations. The simulation studies also demonstrated that the matrix regularisation technique is suitable for deconvolution analysis of time-activity curves even when they are significantly contaminated by statistical noise. Calculation of the transfer function of liver perfusion and of its quantitative parameters thus seems to be a reliable method for non-invasive investigation of liver hemodynamics under physiological and pathological conditions.

  16. Delay-sensitive and delay-insensitive deconvolution perfusion-CT: similar ischemic core and penumbra volumes if appropriate threshold selected for each

    Energy Technology Data Exchange (ETDEWEB)

    Man, Fengyuan [Capital Medical University, Department of Radiology, Beijing Tongren Hospital, Beijing (China); University of Virginia, Department of Radiology, Neuroradiology Division, Charlottesville, VA (United States); Patrie, James T.; Xin, Wenjun [University of Virginia, Department of Public Health Sciences, Charlottesville, VA (United States); Zhu, Guangming [University of Virginia, Department of Radiology, Neuroradiology Division, Charlottesville, VA (United States); Military General Hospital of Beijing PLA, Department of Neurology, Beijing (China); Hou, Qinghua [University of Virginia, Department of Radiology, Neuroradiology Division, Charlottesville, VA (United States); The Second Affiliated Hospital of Guangzhou Medical University, Department of Neurology, Guangzhou (China); Michel, Patrik; Eskandari, Ashraf [Centre Hospitalier Universitaire Vaudois, Department of Neurology, Lausanne (Switzerland); Jovin, Tudor [University of Pittsburgh, Department of Neurology, Pittsburgh, PA (United States); Xian, Junfang; Wang, Zhenchang [Capital Medical University, Department of Radiology, Beijing Tongren Hospital, Beijing (China); Wintermark, Max [University of Virginia, Department of Radiology, Neuroradiology Division, Charlottesville, VA (United States); Centre Hospitalier Universitaire Vaudois, Department of Radiology, Lausanne (Switzerland); Stanford University, Department of Radiology, Neuroradiology Division, Stanford, CA (United States)

    2015-03-07

    Perfusion-CT (PCT) processing involves deconvolution, a mathematical operation that computes the perfusion parameters from the PCT time density curves and an arterial curve. Delay-sensitive deconvolution does not correct for arrival delay of contrast, whereas delay-insensitive deconvolution does. The goal of this study was to compare delay-sensitive and delay-insensitive deconvolution PCT in terms of delineation of the ischemic core and penumbra. We retrospectively identified 100 patients with acute ischemic stroke who underwent admission PCT and CT angiography (CTA), a follow-up vascular study to determine recanalization status, and a follow-up noncontrast head CT (NCT) or MRI to calculate final infarct volume. PCT datasets were processed twice, once using delay-sensitive deconvolution and once using delay-insensitive deconvolution. Regions of interest (ROIs) were drawn, and cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) in these ROIs were recorded and compared. Volume and geographic distribution of ischemic core and penumbra using both deconvolution methods were also recorded and compared. MTT and CBF values are affected by the deconvolution method used (p < 0.05), while CBV values remain unchanged. Optimal thresholds to delineate ischemic core and penumbra are different for delay-sensitive (145 % MTT, CBV 2 ml x 100 g{sup -1} x min{sup -1}) and delay-insensitive deconvolution (135 % MTT, CBV 2 ml x 100 g{sup -1} x min{sup -1}). When applying these different thresholds, however, the predicted ischemic core (p = 0.366) and penumbra (p = 0.405) were similar with both methods. Both delay-sensitive and delay-insensitive deconvolution methods are appropriate for PCT processing in acute ischemic stroke patients. The predicted ischemic core and penumbra are similar with both methods when using different sets of thresholds, specific for each deconvolution method. (orig.)

  17. Matrix inequalities

    CERN Document Server

    Zhan, Xingzhi

    2002-01-01

    The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.

  18. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux , Bordeaux 33 33076 (France); Visvikis, Dimitris [INSERM, UMR1101, LaTIM, Université de Bretagne Occidentale, Brest 29 29609 (France); Fernandez, Philippe; Lamare, Frederic [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux, Bordeaux 33 33076 (France)

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. Adding the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a

  19. Incorporation of wavelet-based denoising in iterative deconvolution for partial volume correction in whole-body PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Boussion, N.; Cheze Le Rest, C.; Hatt, M.; Visvikis, D. [INSERM, U650, Laboratoire de Traitement de l' Information Medicale (LaTIM) CHU MORVAN, Brest (France)

    2009-07-15

    Partial volume effects (PVEs) are consequences of the limited resolution of emission tomography. The aim of the present study was to compare two new voxel-wise PVE correction algorithms based on deconvolution and wavelet-based denoising. Deconvolution was performed using the Lucy-Richardson and the Van-Cittert algorithms. Both of these methods were tested using simulated and real FDG PET images. Wavelet-based denoising was incorporated into the process in order to eliminate the noise observed in classical deconvolution methods. Both deconvolution approaches led to significant intensity recovery, but the Van-Cittert algorithm provided images of inferior qualitative appearance. Furthermore, this method added massive levels of noise, even with the associated use of wavelet-denoising. On the other hand, the Lucy-Richardson algorithm combined with the same denoising process gave the best compromise between intensity recovery, noise attenuation and qualitative aspect of the images. The appropriate combination of deconvolution and wavelet-based denoising is an efficient method for reducing PVEs in emission tomography. (orig.)
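
    The two schemes differ in their update rule: Richardson-Lucy is multiplicative and keeps the estimate non-negative, whereas Van Cittert applies an additive correction that can go negative and amplifies high-frequency noise, consistent with the qualitative findings above. A one-dimensional toy comparison (our own synthetic setup, not the FDG PET data):

        import numpy as np

        rng = np.random.default_rng(6)

        # Noisy "PET-like" 1D measurement: blurred boxes with Poisson counting noise
        psf = np.exp(-0.5 * (np.arange(-10, 11) / 2.5) ** 2)
        psf /= psf.sum()
        blur = lambda v: np.convolve(v, psf, mode="same")  # psf symmetric: adjoint = forward
        truth = np.zeros(120)
        truth[40:50], truth[70:72] = 1.0, 2.0
        img = rng.poisson(blur(truth) * 200) / 200.0

        x_rl = np.ones_like(img)                 # Richardson-Lucy: multiplicative
        for _ in range(50):
            x_rl *= blur(img / np.maximum(blur(x_rl), 1e-9))

        x_vc = img.copy()                        # Van Cittert: additive
        for _ in range(50):
            x_vc += img - blur(x_vc)

        print("Richardson-Lucy mean abs error:", np.abs(x_rl - truth).mean())
        print("Van Cittert mean abs error:    ", np.abs(x_vc - truth).mean())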

  20. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    Science.gov (United States)

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.

  1. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Science.gov (United States)

    González, Adriana; Delouille, Véronique; Jacques, Laurent

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrarily to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  2. De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska

    Science.gov (United States)

    Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.

    2008-01-01

    Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (~250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.

  3. Algorithm for transient response of whole body indirect calorimeter: deconvolution with a regularization parameter.

    Science.gov (United States)

    Tokuyama, Kumpei; Ogata, Hitomi; Katayose, Yasuko; Satoh, Makoto

    2009-02-01

    A whole body indirect calorimeter provides accurate measurement of energy expenditure over long periods of time, but it is limited in its ability to assess dynamic changes. The present study aimed to improve algorithms to compute O(2) consumption and CO(2) production by adopting a stochastic deconvolution method, which controls the relative weight of fidelity to the data and smoothness of the estimates. The performance of the new algorithm was compared with that of other algorithms (moving average, trends identification, Kalman filter, and Kalman smoothing) against validation tests in which energy metabolism was evaluated every 1 min. First, in an in silico simulation study, rectangular or sinusoidal inputs of gradually decreasing periods (64, 32, 16, and 8 min) were applied, and samples collected from the output were corrupted with superimposed noise. Second, CO(2) was infused into a chamber at gradually decreasing intervals and the CO(2) production rate was estimated by the algorithms. In terms of recovery, mean square error, and correlation to the known input signal in the validation tests, deconvolution performed better than the other algorithms. Finally, as a case study, the time course of energy metabolism during sleep, the stages of which were assessed by a standard polysomnogram, was measured in a whole body indirect calorimeter. Analysis of covariance revealed an association of energy expenditure with sleep stage, and energy expenditure computed by deconvolution and Kalman smoothing was more closely associated with sleep stages than that based on trends identification and the Kalman filter. The new algorithm significantly improved the transient response of the whole body indirect calorimeter.
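
    A deterministic cousin of the stochastic deconvolution used above is Tikhonov-regularised deconvolution, in which a single parameter lam explicitly sets the relative weight of fidelity to the data versus smoothness of the estimate. A sketch on a synthetic first-order chamber response (the 20-min time constant, noise level and penalty are our own assumptions, not the calorimeter's calibration):

        import numpy as np

        rng = np.random.default_rng(7)

        # Chamber as a first-order system: measured outflow y = K u + noise
        n, dt, tau = 240, 1.0, 20.0                  # 1-min samples, time constant
        t = np.arange(n) * dt
        h = np.exp(-t / tau) / tau * dt              # discretised impulse response
        K = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                      for i in range(n)])
        u_true = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * t / 64.0))
        y = K @ u_true + 0.01 * rng.standard_normal(n)

        # Solve (K^T K + lam * D^T D) u = K^T y with a second-difference penalty D;
        # larger lam gives smoother but more biased estimates.
        D = np.diff(np.eye(n), n=2, axis=0)
        for lam in (1e-4, 1e-2, 1.0):
            u = np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ y)
            rmse = np.sqrt(np.mean((u - u_true) ** 2))
            print(f"lam = {lam:6g}   rms error = {rmse:.3f}")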

  4. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  5. Matrix analysis

    CERN Document Server

    Bhatia, Rajendra

    1997-01-01

    A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...

  6. Deconvolution of acoustic emissions for source localization using time reverse modeling

    Science.gov (United States)

    Kocur, Georg Karl

    2017-01-01

    Impact experiments on small-scale slabs made of concrete and aluminum were carried out. Wave motion radiated from the epicenter of the impact was recorded as voltage signals by resonant piezoelectric transducers. Numerical simulations of the elastic wave propagation are performed to simulate the physical experiments. The Hertz theory of contact is applied to estimate the force impulse, which is subsequently used for the numerical simulation. Displacements at the transducer positions are calculated numerically. A deconvolution function is obtained by comparing the physical (voltage signal) and the numerical (calculated displacement) experiments. Acoustic emission signals due to pencil-lead breaks are recorded, deconvolved and applied for localization using time reverse modeling.
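
    The deconvolution step amounts to a stabilized spectral division of the recorded voltage signal by the simulated displacement. A generic water-level sketch (Python; the stabilization level is an assumed parameter, not taken from the paper):

        import numpy as np

        def waterlevel_deconvolution(recorded, simulated, level=0.01):
            # Divide spectra R/S, flooring |S| to avoid blow-up where the
            # simulated displacement spectrum is small.
            R = np.fft.rfft(recorded)
            S = np.fft.rfft(simulated)
            floor = level * np.abs(S).max()
            S_stab = np.where(np.abs(S) < floor,
                              floor * np.exp(1j * np.angle(S)), S)
            return np.fft.irfft(R / S_stab, len(recorded))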

  7. Euldep: A program for the Euler deconvolution of magnetic and gravity data

    CSIR Research Space (South Africa)

    Durrheim, RJ

    1998-07-01

    Full Text Available INTRODUCTION: Measurements of the magnetic and gravity field of the Earth are used extensively to explore its structure, particularly in the search for gold, oil, diamonds and other substances of economic value. Magnetic data are frequently collected...
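
    For reference, Euler deconvolution solves the homogeneity equation (x - x0)*dT/dx + (z - z0)*dT/dz = N*(B - T) for the source position (x0, z0) and background B, given a chosen structural index N. A minimal profile-form sketch (Python; illustrative only, not the Euldep program):

        import numpy as np

        def euler_profile(x, z, T, dTdx, dTdz, N):
            # Rearranged: x0*Tx + z0*Tz + N*B = x*Tx + z*Tz + N*T,
            # solved by least squares over a data window.
            G = np.column_stack([dTdx, dTdz, N * np.ones_like(T)])
            d = x * dTdx + z * dTdz + N * T
            (x0, z0, B), *_ = np.linalg.lstsq(G, d, rcond=None)
            return x0, z0, B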

  8. Comparison between the 1D deconvolution and the ETM scatter correction techniques in PET

    Energy Technology Data Exchange (ETDEWEB)

    Trebossen, R.; Bendrien, B.; Frouin, V. [CEA-Service Hospitalier F. Joliot, Orsay (France)] [and others

    1994-05-01

    Scatter corrections usually degrade the Signal-to-Noise Ratio (SNR) while they improve image quantification. Dual-energy corrections provide scatter-corrected images with a poor SNR due to the use of two sinograms having low statistics. We have evaluated the SNR on 20 cm uniform cylinder images, acquired on an ECAT 953B/31 with septa in the field-of-view, corrected for scatter using the 1D deconvolution method and an energy-based correction developed at Orsay. The latter, referred to as the Estimation of True Method (ETM), uses a High Energy Window (HEW) with 550 and 850 keV settings to estimate the true component registered in the Classical Energy Window (CEW) with 250 and 850 keV settings. A sinogram of scattered events is formed from this noisy estimate of the trues. It is filtered and then subtracted from the CEW sinogram to provide a scatter-free sinogram. Nine Regions of Interest (ROI) of 18 mm diameter have been drawn on a 110 mm diameter circle and reported on 11 direct slices (96 million events each in the CEW and 8 million in the HEW). The SNR has been defined as the ratio of the mean over the standard deviation of all ROI values. With the 1D deconvolution the SNR is 38.0, close to that obtained without scatter correction (39.1). It is lower with the ETM, depending on the filter used: with a rectangular window of 9 bins by 15 angles it is 29.8 (26.9 with a 5 by 5 window), while with a 2D Gaussian filter (7 bins by 13 angles variances) it is 30.8. This value is higher than the 22.1 measured on the HEW image. The ETM with adequate filtering allows scatter correction with an SNR acceptable compared with that measured with the 1D deconvolution. Yet the ETM has a clear advantage over the 1D deconvolution in the case of asymmetrical source distributions in non-homogeneous media and in the case of off-plane scattering, as has been tested on various phantom measurements.

  9. Computerized glow curve deconvolution: the case of LiF TLD-100

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, R.K.; Dorendrajit Singh, S.; Mazumdar, P.S. (Manipur Univ. (India). Dept. of Physics)

    1993-05-14

    It has been accepted by a large number of workers that the glow curve of LiF TLD-100 (thermoluminescent dosimetry) can be described by thermoluminescence (TL) peaks following the Randall-Wilkins (RW) equation, even though the model fails to explain a number of experimental facts. A further simplification of the model is the Podgorsak-Moran-Cameron (PMC) approximation, which is also in use. This paper points out the limitations of the PMC approximation in deconvoluting glow curves of LiF TLD-100. (author).

  10. Reconstructing the insulin secretion rate by Bayesian deconvolution of phase-type densities

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

    Estimation of the insulin secretion rate (ISR) can be done by solving a highly ill-posed deconvolution problem. We represent the ISR, the C-peptide concentration and the convolution kernel as scaled phase-type densities and develop a Bayesian methodology for estimating such densities via Markov chain Monte Carlo techniques. Hereby closed-form evaluation of the ISR is possible. We demonstrate the methodology on experimental data from healthy subjects and obtain results which are more realistic than recently reported conclusions based upon methods where the ISR is considered as piecewise constant.

  11. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    Intrinsically fluorescent sterols, like dehydroergosterol (DHE), mimic cholesterol closely and are therefore suitable for determining cholesterol transport by fluorescence microscopy. Disadvantages of DHE are its low quantum yield, rapid bleaching, and the fact that its excitation and emission ... macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon ...

  12. Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources

    Science.gov (United States)

    Kastner, S. O.; Bhatia, A. K.

    1999-01-01

    Optically thick, non-Gaussian emission line profiles convolved with Gaussian instrumental profiles are constructed, and are deconvolved on the usual Gaussian basis to examine the departure from accuracy thereby caused in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r (the ratio of Doppler width to instrumental width) and on the optical thickness tau(sub 0). An approximating expression is obtained for this factor, applicable in the range of at least 0 < tau(sub 0), yielding estimates of the true linewidth and optical thickness.

  13. The WaveD Transform in R: Performs Fast Translation-Invariant Wavelet Deconvolution

    Directory of Open Access Journals (Sweden)

    Marc Raimondo

    2007-04-01

    Full Text Available This paper provides an introduction to a software package called waved, making available all code necessary for reproducing the figures in the recently published articles on the WaveD transform for wavelet deconvolution of noisy signals. The forward WaveD transforms and their inverses can be computed using any wavelet from the Meyer family. The WaveD coefficients can be depicted according to time and resolution in several ways for data analysis. The algorithm which implements the translation-invariant WaveD transform takes full advantage of the fast Fourier transform (FFT) and runs in O(n (log n)^2) steps.

  14. Matrix pentagons

    CERN Document Server

    Belitsky, A V

    2016-01-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multiparticle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unravelled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  15. Renogram and deconvolution parameters in diagnosis of renal artery stenosis. Variants of background subtraction and analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kempi, V. [Dept. of Clinical Physiology, Sjukhuset, Oestersund (Sweden)

    2007-07-01

    Aim: Multivariate statistical methods can be used for objective analysis. The emphasis is on analysing renal function parameters together, not one at a time. The aim is to identify curve parameters useful in making predictions in kidneys with and without renal artery stenosis (RAS). Patients, methods: 68 patients with resistant hypertension were subjected to captopril renography with {sup 99m}Tc-DTPA. Variants of background areas and background subtraction methods were employed. A correction was applied for loss of renal parenchyma. Parameters from time-activity curves and retention curves from deconvolution were calculated. Renal angiography established the presence or absence of RAS. Logistic regression analysis, using age- and kidney size-adjusted models, was performed to assess the capability of renography and deconvolution to differentiate between kidneys with and without RAS. Results: Discrimination between normal kidneys and RAS was achieved by deconvolution and by renography. Deconvolution was the method of first rank with a sensitivity of 87% and a specificity of 98%. For separation of RAS and kidneys with parenchymal insufficiency deconvolution was the method of first rank with a sensitivity of 80% and specificity of 89%, whereas renography produced poor results. Conclusion: The best performance with {sup 99m}Tc-DTPA was based on normalised background subtraction using a rectangular area between the kidneys. Deconvolution produced the most favourable results in the separation of kidneys with and without RAS. For separation of RAS and kidneys with parenchymal insufficiency conventional renography produced poor results. Conceptually, the results of a logistic regression analysis of renal function parameters may raise possibilities in the field of computer-aided diagnosis. (orig.)

  16. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    Science.gov (United States)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of filters, i.e., the region of the filter coefficient space whose coefficients play a major role in multiple removal. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1-norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computation burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and the FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
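
    The L1-constrained filter estimation referred to here is the standard FISTA iteration. A generic sketch (Python) for min_x 0.5*||A x - b||^2 + lam*||x||_1, where A would be built from the data windows and x holds the filter coefficients in the limited supporting region; this is not the paper's implementation:

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2            # gradient Lipschitz constant
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)
                x_new = soft(y - grad / L, lam / L)  # proximal (shrinkage) step
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x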

  17. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

    Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvolved profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)
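
    The Fourier-series route is essentially Stokes' method: the Fourier coefficients of the pure profile are the ratio of those of the measured and instrumental profiles, with damping because the division is ill-posed. A bare-bones sketch (Python; the threshold eps is an assumed, crude regularizer, not the paper's scheme):

        import numpy as np

        def stokes_deconvolution(h, g, eps=1e-3):
            # h: measured profile, g: instrumental profile (same length).
            H, G = np.fft.fft(h), np.fft.fft(g)
            mask = np.abs(G) > eps * np.abs(G).max()
            F = np.where(mask, H, 0.0) / np.where(mask, G, 1.0)
            return np.real(np.fft.ifft(F))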

  18. Fast Nonnegative Deconvolution for Spike Train Inference From Population Calcium Imaging

    Science.gov (United States)

    Packer, Adam M.; Machado, Timothy A.; Sippy, Tanya; Babadi, Baktash; Yuste, Rafael; Paninski, Liam

    2010-01-01

    Fluorescent calcium indicators are becoming increasingly popular as a means for observing the spiking activity of large neuronal populations. Unfortunately, extracting the spike train of each neuron from a raw fluorescence movie is a nontrivial problem. This work presents a fast nonnegative deconvolution filter to infer the approximately most likely spike train of each neuron, given the fluorescence observations. This algorithm outperforms optimal linear deconvolution (Wiener filtering) on both simulated and biological data. The performance gains come from restricting the inferred spike trains to be positive (using an interior-point method), unlike the Wiener filter. The algorithm runs in linear time, and is fast enough that even when simultaneously imaging >100 neurons, inference can be performed on the set of all observed traces faster than real time. Performing optimal spatial filtering on the images further refines the inferred spike train estimates. Importantly, all the parameters required to perform the inference can be estimated using only the fluorescence data, obviating the need to perform joint electrophysiological and imaging calibration experiments. PMID:20554834
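
    The core idea, a spike train constrained to be nonnegative under a first-order calcium kernel, can be written down compactly. A readable (but slow) NNLS sketch in Python; the paper's interior-point filter is far faster, and gamma is the assumed per-frame calcium decay:

        import numpy as np
        from scipy.optimize import nnls

        def nonneg_spike_inference(F, gamma):
            # Model: c_t = gamma * c_{t-1} + n_t, F ~ c, n_t >= 0,
            # i.e. c = K @ n with an exponential-decay kernel matrix K.
            T = len(F)
            E = np.subtract.outer(np.arange(T), np.arange(T))   # i - j
            K = np.where(E >= 0, gamma ** np.clip(E, 0, None), 0.0)
            n_hat, _ = nnls(K, np.asarray(F, dtype=float))
            return n_hat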

  19. Discriminating adenocarcinoma from normal colonic mucosa through deconvolution of Raman spectra

    Science.gov (United States)

    Cambraia Lopes, Patricia; Moreira, Joaquim Agostinho; Almeida, Abilio; Esteves, Artur; Gregora, Ivan; Ledinsky, Martin; Lopes, Jose Machado; Henrique, Rui; Oliveira, Albino

    2011-12-01

    In this work, we considered the feasibility of Raman spectroscopy for discriminating between adenocarcinomatous and normal mucosal formalin-fixed colonic tissues. Unlike earlier studies in colorectal cancer, a spectral deconvolution model was implemented to derive spectral information. Eleven samples of human colon were used, and 55 spectra were analyzed. Each spectrum was resolved into 25 bands from 975 to 1720 cm-1, where modes of proteins, lipids, and nucleic acids are observed. From a comparative study of band intensities, those presenting the largest differences between tissue types were correlated with biochemical assignments. Results from the fitting procedure were further used as inputs for linear discriminant analysis, where combinations of band intensities and intensity ratios were tested, yielding accuracies up to 81%. This analysis yields objective discriminating parameters after fitting optimization. The bands with the highest diagnostic relevance detected by spectral deconvolution make it possible to confine the study to a few spectral regions instead of broader ranges. A critical view of the limitations of this approach is presented, along with a comparison of our results to earlier ones obtained in fresh colonic tissues. This enabled us to assess the effect of formalin fixation in colonic tissues and determine its relevance in the present analysis.

  20. Deconvolution of differential OTF (dOTF) to measure high-resolution wavefront structure

    Science.gov (United States)

    Knight, Justin M.; Rodack, Alexander T.; Codona, Johanan L.; Miller, Kelsey L.; Guyon, Olivier

    2015-09-01

    Differential OTF (dOTF) uses two images taken with a telescope pupil modification between them to measure the complex field over most of the pupil. If the pupil modification involves a non-negligible region of the pupil, the dOTF field is blurred by convolution with the complex conjugate of the pupil field change. In some cases, the convolution kernel, or difference field, can cause significant blurring. We explore using deconvolution to recover a high-resolution measurement of the complex pupil field. In particular, by assuming we know something about the area and nature of the difference field, we can construct a Wiener filter that increases the resolution of the complex pupil field estimate in the presence of noise. By introducing a controllable pupil modification, such as actuating a telescope primary mirror segment in piston-tip-tilt to make the measurement, we explain added features of the difference field which can be used to increase the signal-to-noise ratio for information in arbitrary ranges of spatial frequency. We present theory and numerical simulations to discuss key features of the difference field which lead to its utility for deconvolution of dOTF measurements.
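
    The Wiener step described here is the textbook one, applied to a complex field. A minimal sketch (Python; 'snr' is an assumed scalar signal-to-noise estimate, and the difference-field kernel is assumed known):

        import numpy as np

        def wiener_deconvolve(blurred, kernel, snr):
            # Attenuate frequencies where the kernel is weak instead of
            # dividing blindly; the output stays complex, as a pupil
            # field estimate should.
            B = np.fft.fft2(blurred)
            K = np.fft.fft2(kernel, s=blurred.shape)
            W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
            return np.fft.ifft2(W * B)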

  1. Remote heartbeat signal detection from visible spectrum recordings based on blind deconvolution

    Science.gov (United States)

    Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.

    2016-05-01

    While recent advances have shown that it is possible to acquire a signal equivalent to the heartbeat from visual spectrum video recordings of the human skin, extracting the heartbeat's exact timing information from it, for the purpose of heart rate variability analysis, remains a challenge. In this paper, we explore two novel methods to estimate the remote cardiac signal peak positions, aiming at a close representation of the R-peaks of the ECG signal. The first method is based on curve fitting (CF) using a modified filtered least mean square (LMS) optimization, and the second method is based on system estimation using blind deconvolution (BDC). To prove the efficacy of the developed algorithms, we compared the results obtained against the ground-truth (ECG) signal. Both methods achieved a low relative error between the peaks of the two signals. This work, performed under an IRB-approved protocol, provides initial proof that blind deconvolution techniques can be used to estimate timing information of the cardiac signal closely correlated to that obtained by traditional ECG. The results show promise for further development of remote sensing of cardiac signals for the purpose of remote vital sign and stress detection for medical, security, military and civilian applications.

  2. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    CERN Document Server

    Gonzalez, Adriana; Jacques, Laurent

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. Optics are never perfect and the non-ideal path through the telescope is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Other sources of noise (read-out, photon) also contaminate the image acquisition process. The problem of estimating both the PSF filter and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, it does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis image prior model and weak assumptions on the PSF filter's response. We use the observations from a celestial body transit where such an object can be assumed to be a black disk. Such constraints limit the interchangeabil...

  3. Thorium concentrations in the lunar surface. V - Deconvolution of the central highlands region

    Science.gov (United States)

    Metzger, A. E.; Etchegaray-Ramirez, M. I.; Haines, E. L.

    1982-01-01

    The distribution of thorium in the lunar central highlands measured from orbit by the Apollo 16 gamma-ray spectrometer is subjected to a deconvolution analysis to yield improved spatial resolution and contrast. Use of two overlapping data fields for complete coverage also provides a demonstration of the technique's ability to model concentrations several degrees beyond the data track. Deconvolution reveals an association between Th concentration and the Kant Plateau, Descartes Mountain and Cayley plains surface formations. The Kant Plateau and Descartes Mountains model with Th less than 1 part per million, which is typical of farside highlands but is infrequently seen over any other nearside highland portions of the Apollo 15 and 16 ground tracks. It is noted that, if the Cayley plains are the result of basin-forming impact ejecta, the distribution of Th concentration with longitude supports an origin from the Imbrium basin rather than the Nectaris or Orientale basins. Nectaris basin materials are found to have a Th concentration similar to that of the Descartes Mountains, evidence that the latter may have been emplaced as Nectaris basin impact deposits.

  4. Stochastic model error in the LANS-alpha and NS-alpha deconvolution models of turbulence

    CERN Document Server

    Olson, Eric

    2015-01-01

    This paper reports on a computational study of the model error in the LANS-alpha and NS-alpha deconvolution models of homogeneous isotropic turbulence. The focus is on how well the model error may be characterized by a stochastic force. Computations are also performed for a new turbulence model obtained as a rescaled limit of the deconvolution model. The technique used is to plug a solution obtained from direct numerical simulation of the incompressible Navier--Stokes equations into the competing turbulence models and to then compute the time evolution of the resulting residual. All computations have been done in two dimensions rather than three for convenience and efficiency. When the effective averaging length scale in any of the models is $\\alpha_0=0.01$ the time evolution of the root-mean-squared residual error grows as $\\sqrt t$. This growth rate is consistent with the hypothesis that the model error may be characterized by a stochastic force. When $\\alpha_0=0.20$ the residual error grows linearly. Linea...

  5. A comparison of different peak shapes for deconvolution of alpha-particle spectra

    Energy Technology Data Exchange (ETDEWEB)

    Marzo, Giuseppe A., E-mail: giuseppe.marzo@enea.it

    2016-10-01

    Alpha-particle spectrometry is a standard technique for assessing the sample content in terms of alpha-decaying isotopes. A comparison of spectral deconvolutions performed adopting different peak shape functions has been carried out and a sensitivity analysis has been performed to test for the robustness of the results. As previously observed, there is evidence that the alpha peaks are well reproduced by a Gaussian modified by a function which takes into account the prominent tailing that an alpha-particle spectrum measured by means of a silicon detector exhibits. Among the different peak shape functions considered, that proposed by G. Bortels and P. Collaers, Int. J. Rad. Appl. Instrum. A 38, pp. 831–837 (1987) is the function which provides more accurate and more robust results when the spectral resolution is high enough to make such tailing significant. Otherwise, in the case of lower resolution alpha-particle spectra, simpler peak shape functions which are characterized by a lower number of fitting parameters provide adequate results. The proposed comparison can be useful for selecting the most appropriate peak shape function when accurate spectral deconvolution of alpha-particle spectra is sought.
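
    The tailing referred to here is commonly modeled by a Gaussian convolved with left-handed exponential tails. A single-tail sketch (Python; a simplified stand-in for, not a reproduction of, the multi-tail Bortels-Collaers function):

        import numpy as np
        from scipy.special import erfc

        def tailed_alpha_peak(E, E0, sigma, tau):
            # Exponentially modified Gaussian tailing toward low energy:
            # Gaussian width sigma, tail constant tau, centroid E0.
            arg = (E - E0) / sigma + sigma / tau
            return (0.5 / tau) * np.exp((E - E0) / tau
                                        + 0.5 * (sigma / tau) ** 2) \
                   * erfc(arg / np.sqrt(2))

    For E well below E0 the shape decays exponentially with constant tau (the low-energy tail), while the high-energy edge stays Gaussian, which matches the qualitative behavior of silicon-detector alpha peaks.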

  6. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    Science.gov (United States)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed `MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images, which can mitigate the drawbacks of the conventional total variation (TV) regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation and accurate MPHM estimation.

  7. A blind deconvolution method for ground based telescopes and Fizeau interferometers

    CERN Document Server

    Prato, M; Bonettini, S; Rebegoldi, S; Bertero, M; Boccacci, P

    2015-01-01

    In the case of ground-based telescopes equipped with adaptive optics systems, the point spread function (PSF) is only poorly known or completely unknown. Moreover, an accurate modeling of the PSF is in general not available. Therefore in several imaging situations the so-called blind deconvolution methods, aiming at estimating both the scientific target and the PSF from the detected image, can be useful. A blind deconvolution problem is severely ill-posed and, in order to reduce the extremely large number of possible solutions, it is necessary to introduce sensible constraints on both the scientific target and the PSF. In a previous paper we proposed a sound mathematical approach based on a suitable inexact alternating minimization strategy for minimizing the generalized Kullback-Leibler divergence, assuring global convergence. In the framework of this method we showed that an important constraint on the PSF is the upper bound which can be derived from the knowledge of its Strehl ratio. The efficacy of the ap...

  8. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    Science.gov (United States)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  9. Computational deconvolution of gene expression by individual host cellular subsets from microarray analyses of complex, parasite-infected whole tissues.

    Science.gov (United States)

    Banskota, Nirad; Odegaard, Justin I; Rinaldi, Gabriel; Hsieh, Michael H

    2016-06-01

    Analyses of whole organs from parasite-infected animals can reveal the entirety of the host tissue transcriptome, but conventional approaches make it difficult to dissect out the contributions of individual cellular subsets to observed gene expression. Computational deconvolution of gene expression data may be one solution to this problem. We tested this potential solution by deconvoluting whole bladder gene expression microarray data derived from a model of experimental urogenital schistosomiasis. A supervised technique was used to group B-cell and T-cell related genes based on their cell types, with a semi-supervised technique to calculate the proportions of urothelial cells. We demonstrate that the deconvolution technique was able to group genes into their correct cell types with good accuracy. A clustering-based methodology was also used to improve prediction. However, incorrectly predicted genes could not be discriminated using this methodology. The incorrect predictions were primarily IgH- and IgK-related genes. To our knowledge, this is the first application of computational deconvolution to complex, parasite-infected whole tissues. Other computational techniques such as neural networks may need to be used to improve prediction. Copyright © 2016 Australian Society for Parasitology Inc. Published by Elsevier Ltd. All rights reserved.
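
    A common baseline for this kind of deconvolution is nonnegative regression of the bulk profile against a cell-type signature matrix. A sketch (Python; the signature matrix is an assumed input, and this is not the semi-supervised method used in the paper):

        import numpy as np
        from scipy.optimize import nnls

        def estimate_cell_fractions(bulk_expr, signature):
            # signature: genes x cell types; bulk_expr: genes.
            w, _ = nnls(signature, bulk_expr)
            total = w.sum()
            return w / total if total > 0 else w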

  10. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique

    Science.gov (United States)

    Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.

    2005-01-01

    A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…

  11. 3D Marine CSEM Interferometry by Multidimensional Deconvolution in the Wavenumber Domain for a Sparse Receiver Grid

    NARCIS (Netherlands)

    Hunziker, J.W.; Slob, E.C.; Fan, Y.; Snieder, R.; Wapenaar, C.P.A.

    2013-01-01

    We use interferometry by multidimensional deconvolution in combination with synthetic aperture sources in 3D to suppress the airwave and the direct field, and to decrease source uncertainty in marine Controlled-Source electromagnetics. We show with this numerical study that the method works for very

  12. Riemann Zeta Matrix Function

    OpenAIRE

    Kargın, Levent; Kurt, Veli

    2015-01-01

    In this study, by obtaining the matrix analog of Euler's reflection formula for the classical gamma function, we expand the domain of the gamma matrix function and give an infinite product expansion of sin(πxP). Furthermore, we define the Riemann zeta matrix function and evaluate some other matrix integrals. We prove a functional equation for the Riemann zeta matrix function.

  13. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis

    Science.gov (United States)

    Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213

  14. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Full Text Available Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  15. PLSA-based pathological image retrieval for breast cancer with color deconvolution

    Science.gov (United States)

    Ma, Yibing; Shi, Jun; Jiang, Zhiguo; Feng, Hao

    2013-10-01

    Digital pathological image retrieval plays an important role in computer-aided diagnosis for breast cancer. The retrieval results for an unknown pathological image, which are generally previous cases with diagnostic information, can provide doctors with assistance and reference. In this paper, we develop a novel pathological image retrieval method for breast cancer based on stain components and a probabilistic latent semantic analysis (pLSA) model. Specifically, the method first utilizes color deconvolution to obtain representations of the different stain components for cell nuclei and cytoplasm; block Gabor features are then extracted from the cell nuclei and used to construct a codebook. Furthermore, the connection between the words of the codebook and the latent topics among images is modeled by pLSA, so that each image can be represented by topics and the high-level semantic concepts of the image can be described. Experiments on a pathological image database for breast cancer demonstrate the effectiveness of our method.
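
    The stain-separation step can be reproduced with Ruifrok-Johnston color deconvolution, available in scikit-image. A sketch (Python; assumes an H&E-type stain vector set, not the authors' exact pipeline):

        import numpy as np
        from skimage.color import rgb2hed

        def stain_components(rgb_image):
            # Project the RGB image onto haematoxylin/eosin/DAB stain axes.
            hed = rgb2hed(rgb_image)
            hematoxylin, eosin = hed[..., 0], hed[..., 1]
            return hematoxylin, eosin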

  16. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required to obtain information that relates to crystallographic surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides a means of atomic resolution where the tip participates actively in the process of imaging. Two metallic surfaces influence ions trapped ... of the characteristic details of the images. A large proportion of the observed noise may be explained by the scanning actions of the feedback circuitry, while a minor fraction of the image details may be explained by surface drift phenomena. As opposed to the method of deconvolution, conventional methods of filtering ...

  17. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    ... filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample, or about 1 billion operations per second, for real-time deconvolution. Furthermore, these have to be floating-point operations due to the adaptive nature of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating-point operations per second on RF ... of the system is its generous input/output bandwidth, which makes it easy to balance the computational load between the processors and prevents data starvation. Due to the use of floating-point calculations it is possible to simulate all types of signal processing in modern ultrasound scanners, and this system is...

  18. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and a Tertiary rock intrusion. This area has been in the preliminary study phase of geology, geochemistry, and geophysics. As part of the geophysical studies, gravity measurements have been carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The analysis results will be useful in building a more realistic conceptual model of the Endut geothermal area.
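
    The horizontal gradient operator itself is straightforward: its maxima track the edges of density contrasts such as faults. A gridded sketch (Python; assumes a regular grid with spacings dx, dy):

        import numpy as np

        def horizontal_gradient(bouguer, dx, dy):
            # Magnitude of the horizontal gradient of a Bouguer anomaly grid.
            gy, gx = np.gradient(bouguer, dy, dx)
            return np.hypot(gx, gy)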

  19. Application of computerized glow curve deconvolution to determine the spectroscopy of traps in colorless microcline

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, B. Arunkumar [Department of Radiotherapy, RIMS, Lamphel, Imphal 795004, Manipur (India)], E-mail: arunsb2000@yahoo.co.uk; Singh, A. Nabachandra [Department of Physics, Thoubal College, Thoubal 795138, Manipur (India); Singh, S. Nabadwip [Department of Physics, Kumbi College, Kumbi 795133, Manipur (India); Singh, O. Binoykumar [Department of Physics, Y.K. College, Wangjing 795148, Manipur (India)

    2009-01-15

    Kinetic parameters of the glow peaks (as many as 14 in the range 75-575 deg. C) of colorless microcline have been determined to a high degree of certainty by resorting to computerized glow curve deconvolution (CGCD) in the framework of the kinetics formalism. The second-derivative plot of the experimental glow curve is used to locate the hidden glow peaks. The goodness of fit between the experimental glow curve and the numerically generated best-fit curve is judged by a statistical test, namely the {chi}{sup 2}-test. As a cross-check, the figure of merit (FOM) is also evaluated. The kinetic parameters of the electrons in the higher-temperature traps of colorless microcline are determined by using lower heating rates.

  20. Fading prediction in thermoluminescent materials using computerised glow curve deconvolution (CGCD)

    CERN Document Server

    Furetta, C; Weng, P S

    1999-01-01

    The fading of three different thermoluminescent (TL) materials, CaF{sub 2}:Tm (TLD-300), monocrystalline LiF:Mg,Ti (DTG-4) and MgB{sub 4}O{sub 7}:Dy,Na, has been studied at room temperature and at 50 deg. C storage. The evolution, as a function of elapsed time, of the whole glow curve as well as of the individual peaks has been analysed using the Computerised Glow Curve Deconvolution (CGCD) program developed at the NTHU. The analysis makes it possible to predict the loss of dosimetric information and to make any correction necessary for using the TL dosimeters in practical applications. Furthermore, it is well demonstrated that using CGCD it is not necessary to anneal out the rapidly fading peaks in order to avoid their interfering effect on the more stable peaks.

  1. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    Science.gov (United States)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observations and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to perceive effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
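
    The LoG response underlying the proposed indices is easy to compute; a toy no-reference statistic in that domain might look like the following (Python; a stand-in for, not a reproduction of, the paper's matched-LoG indices):

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_sharpness(image, sigma=2.0):
            # Stronger, more consistent LoG responses suggest better
            # restored local contrast.
            response = gaussian_laplace(np.asarray(image, dtype=float), sigma)
            return np.std(response)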

  2. Standardized Whole-Blood Transcriptional Profiling Enables the Deconvolution of Complex Induced Immune Responses

    Directory of Open Access Journals (Sweden)

    Alejandra Urrutia

    2016-09-01

    Full Text Available Systems approaches for the study of immune signaling pathways have been traditionally based on purified cells or cultured lines. However, in vivo responses involve the coordinated action of multiple cell types, which interact to establish an inflammatory microenvironment. We employed standardized whole-blood stimulation systems to test the hypothesis that responses to Toll-like receptor ligands or whole microbes can be defined by the transcriptional signatures of key cytokines. We found 44 genes, identified using Support Vector Machine learning, that captured the diversity of complex innate immune responses with improved segregation between distinct stimuli. Furthermore, we used donor variability to identify shared inter-cellular pathways and trace cytokine loops involved in gene expression. This provides strategies for dimension reduction of large datasets and deconvolution of innate immune responses applicable for characterizing immunomodulatory molecules. Moreover, we provide an interactive R-Shiny application with healthy donor reference values for induced inflammatory genes.

  3. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of the optical sciences that has attracted much attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages, such as three-dimensional refocusing and unambiguous object reconstruction.

  4. Thorium concentrations in the lunar surface. III - Deconvolution of the Apenninus region

    Science.gov (United States)

    Metzger, A. E.; Haines, E. L.; Etchegaray-Ramirez, M. I.; Hawke, B. R.

    1979-01-01

    A technique for deconvolving orbital gamma-ray data which improves spatial resolution and contrast has been applied to Th concentrations in the Apenninus region of the moon. The highest concentration seen from orbit has been found along the northern edge of the data track at Archimedes, requiring a component more highly fractionated in KREEP than the Apollo 15 medium-K Fra Mauro basalt. The results show generally diminishing Th levels extending outward from the Imbrium Basin, and impact penetration of basalt flows in Mare Imbrium ejecting Th-rich sub-mare material. The results reinforce the hypothesis that the highlands which border and underlie the western maria contain a pre-mare layer of volcanically derived KREEP material.

  5. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  6. Multi-images deconvolution improves signal-to-noise ratio on gated stimulated emission depletion microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Castello, Marco [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy); DIBRIS, University of Genoa, Via Opera Pia 13, Genoa 16145 (Italy); Diaspro, Alberto [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy); Nikon Imaging Center, Via Morego 30, Genoa 16163 (Italy); Vicidomini, Giuseppe, E-mail: giuseppe.vicidomini@iit.it [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy)

    2014-12-08

    Time-gated detection, namely, collecting only the fluorescence photons that arrive after a time delay from the excitation events, reduces the complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave (CW) STED implementation, the spatial resolution improves with increased time delay, but the signal-to-noise ratio (SNR) is reduced. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time gates and recombines them through a multi-image deconvolution. Our results, obtained on simulated and experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.
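
    The photon-preserving recombination can be sketched as multi-image Richardson-Lucy, with one effective PSF per time gate. A textbook version (Python; PSFs assumed nonnegative and normalized, not the authors' code):

        import numpy as np
        from scipy.signal import fftconvolve

        def multi_image_rl(images, psfs, n_iter=50):
            est = np.full(images[0].shape, float(np.mean(images[0])))
            for _ in range(n_iter):
                update = np.zeros_like(est)
                for img, psf in zip(images, psfs):
                    blur = fftconvolve(est, psf, mode='same')
                    ratio = img / np.maximum(blur, 1e-12)
                    update += fftconvolve(ratio, psf[::-1, ::-1], mode='same')
                est *= update / len(images)   # average multiplicative updates
            return est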

  7. LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure

    Science.gov (United States)

    Wang, Qing; Wu, Hao; Ihme, Matthias

    2015-11-01

    The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The authors would like to acknowledge the support of funding from a Stanford Graduate Fellowship.

  8. Reduction of blurring in broadband volume holographic imaging using a deconvolution method

    Science.gov (United States)

    Lv, Yanlu; Zhang, Xuanxuan; Zhang, Dong; Zhang, Lin; Luo, Yuan; Luo, Jianwen

    2016-01-01

    Volume holographic imaging (VHI) is a promising biomedical imaging tool that can simultaneously provide multi-depth or multispectral information. When a VHI system is probed with a broadband source, the intensity spreads in the horizontal direction, causing degradation of the image contrast. We theoretically analyzed the cause of the horizontal intensity spread, and the analysis was validated by simulation and by experimental results on the broadband impulse response of the VHI system. We propose a deconvolution method to reduce the horizontal intensity spread and increase the image contrast. Imaging experiments with three different objects, including a bright-field-illuminated USAF test target, a lung tissue specimen, and fluorescent beads, were carried out to test the performance of the proposed method. The results demonstrate that the proposed method can significantly improve the horizontal contrast of images acquired by a broadband VHI system. PMID:27570703

  9. Fourier Self-deconvolution Using Approximation Obtained from Frequency Domain Wavelet Transform as a Linear Function

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new method of resolving overlapped peaks is presented: Fourier self-deconvolution (FSD) that uses the approximation CN, obtained from the frequency-domain wavelet transform of F(ω) (itself the Fourier transform of the overlapped peak signal f(t)), as the linear function. Compared with classical FSD, the new method exhibits excellent resolution for different overlapped peak signals, such as HPLC signals, and has broad applicability to overlapped peaks of any shape as well as simple operation, since no selection procedure for the linear function is needed. Its excellent resolution for these different overlapped peak signals arises mainly because F(ω), obtained from the Fourier transform of f(t), and CN, obtained from the wavelet transform of F(ω), have similar linearity and peak width. The effect of spurious peaks can be eliminated by the algorithm proposed by the authors. This method has good potential for processing different overlapped peak signals.
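
    Classical FSD, against which the method above is compared, divides the spectrum's Fourier transform by that of an assumed Lorentzian lineshape and re-apodizes to keep noise in check. A generic sketch (Python; gamma is the assumed Lorentzian half-width in sampling units, and apod is an assumed apodization strength):

        import numpy as np

        def fourier_self_deconvolution(y, gamma, apod=0.5):
            n = len(y)
            Y = np.fft.rfft(y)
            k = np.arange(len(Y))
            narrow = np.exp(2 * np.pi * gamma * k / n)   # undo Lorentzian decay
            window = np.exp(-(apod * k / len(Y)) ** 2)   # Gaussian apodization
            return np.fft.irfft(Y * narrow * window, n)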

  10. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    CERN Document Server

    Prato, M; Bonettini, S; Bertero, M

    2013-01-01

    In this paper we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback-Leibler divergence, depending on both the unknown object and the unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is nonconvex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has recently been proved in a general setting. The method is iterative and each iteration, also called an outer iteration, consists of alternating an update of the object and the PSF by means of fixed numbers of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. The use of SGP has two advantages: first, it allows us to prove global convergence of the blind method; secondly, it allows the introduction of different const...
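
    The alternating outer/inner structure can be illustrated as below. Note that this sketch substitutes plain Richardson-Lucy multiplicative steps for the paper's SGP inner iterations and keeps only the simplest convex constraints (nonnegativity and a unit-sum PSF), so it shows the scheme rather than the authors' exact algorithm:

        import numpy as np

        def cconv(a, k):
            # circular convolution via FFT; kernel origin at index (0, 0)
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(k)))

        def flip(k):
            # adjoint kernel for circular convolution
            return np.roll(k[::-1, ::-1], shift=(1, 1), axis=(0, 1))

        def blind_deconv(g, f, h, outer=20, inner=5, eps=1e-12):
            # g: observed Poisson-noisy image; f, h: initial object and PSF guesses
            f, h = f.astype(float), h.astype(float)
            for _ in range(outer):
                for _ in range(inner):                     # object step, PSF frozen
                    f *= cconv(g / (cconv(f, h) + eps), flip(h))
                for _ in range(inner):                     # PSF step, object frozen
                    h *= cconv(g / (cconv(f, h) + eps), flip(f))
                    h = np.clip(h, 0, None); h /= h.sum() # nonnegative, unit-sum PSF
            return f, h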

  11. Fast Total-Variation Image Deconvolution with Adaptive Parameter Estimation via Split Bregman Method

    Directory of Open Access Journals (Sweden)

    Chuan He

    2014-01-01

    The total-variation (TV) regularization has been widely used in the image restoration domain, owing to its attractive edge-preservation ability. However, the estimation of the regularization parameter, which balances the TV regularization term and the data-fidelity term, is a difficult problem. In this paper, based on the classical split Bregman method, a new fast algorithm is derived to simultaneously estimate the regularization parameter and restore the blurred image. In each iteration, the regularization parameter is conveniently updated in closed form according to Morozov's discrepancy principle. Numerical experiments in image deconvolution show that the proposed algorithm outperforms some state-of-the-art methods in both accuracy and speed.

  12. A Direct Cortico-Nigral Pathway as Revealed by Constrained Spherical Deconvolution Tractography in Humans

    Directory of Open Access Journals (Sweden)

    Alberto Cacciola

    2016-07-01

    The substantia nigra is an important neuronal structure, located in the ventral midbrain, that exerts a regulatory function within the basal ganglia circuitry through the nigro-striatal pathway. Although its subcortical connections are relatively well known in the human brain, very little is known about its cortical connections. The existence of a direct cortico-nigral pathway has been demonstrated in rodents and primates but only hypothesized in humans. In this study, we aimed at evaluating the cortical connections of the substantia nigra in vivo in the human brain, by using probabilistic constrained spherical deconvolution tractography on magnetic resonance diffusion-weighted imaging data. We found that the substantia nigra is connected with the cerebral cortex as a whole, with the most representative connections involving the prefrontal cortex, the precentral and postcentral gyri, and the superior parietal lobule. These results may be relevant for the comprehension of the pathophysiology of several neurological disorders involving the substantia nigra, such as Parkinson's disease, schizophrenia and pathological addictions.

  13. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the distribution of the actual sources and the beamformer's point-spread function, defined as the beamformer's response to a point source. By deconvolving the resulting map, the resolution is improved, and the side-lobe effect is reduced or even eliminated compared to conventional beamforming. Even though these methods were originally designed for planar sparse arrays, in the present study they are adapted to uniform circular arrays for mapping the sound over 360°. This geometry has the advantage that the beamforming output is practically independent of the focusing direction...

  14. Deconvolution as a means of correcting turbulence power spectra measured by LDA

    Science.gov (United States)

    Buchhave, Preben; Velte, Clara

    2014-11-01

    Measurement of turbulence power spectra by means of laser Doppler anemometry (LDA) has proven to be a difficult task. Among the problems affecting the shape of the spectrum are noise in the signal and changes in the sample rate caused by unintentional effects in the measuring apparatus or even in the mathematical algorithms used to evaluate the spectrum. We analyze the effect of various causes of bias in the sample rate and show that the effect is a convolution of the true spectrum with various spectral functions. We show that these spectral functions can be measured with the data available from a standard LDA processor, and we use this knowledge to correct the measured spectrum by deconvolution. We present results supported by realistic computer-generated data using two different spectral estimators, the so-called slotted autocovariance method and the so-called direct method.
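
    Once a distorting spectral function has been estimated, the correction itself is a deconvolution; a hedged sketch using a Wiener-style regularized division in the Fourier domain (the regularization constant and the function name are illustrative):

        import numpy as np

        def deconvolve_spectrum(measured, kernel, noise_floor=1e-3):
            # measured, kernel: spectrum and estimated spectral function on the same axis
            M = np.fft.rfft(measured)
            K = np.fft.rfft(kernel)
            # Wiener-style inverse guards against the near-zero tail of K
            S = M * np.conj(K) / (np.abs(K) ** 2 + noise_floor)
            return np.fft.irfft(S, n=len(measured))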

  15. Electronic field free line rotation and relaxation deconvolution in magnetic particle imaging.

    Science.gov (United States)

    Bente, Klaas; Weber, Matthias; Graeser, Matthias; Sattel, Timo F; Erbe, Marlitt; Buzug, Thorsten M

    2015-02-01

    It has been shown that magnetic particle imaging (MPI), an imaging method suggested in 2005, is capable of measuring the spatial distribution of magnetic nanoparticles. Since the particles can be administered as biocompatible suspensions, this method promises to perform well as a tracer-based medical imaging technique. It is capable of generating real-time images, which will be useful in interventional procedures, without utilizing any harmful radiation. To obtain a signal from the administered superparamagnetic iron oxide (SPIO) particles, a sinusoidally varying external homogeneous magnetic field is applied. To achieve spatial encoding, a gradient field is superimposed. Conventional MPI works with a spatial encoding field that features a field free point (FFP). To increase sensitivity, an improved spatial encoding field featuring a field free line (FFL) can be used. Previous FFL scanners, featuring a 1-D excitation, demonstrated the feasibility of the FFL-based MPI imaging process. In this work, an FFL-based MPI scanner is presented that features a 2-D excitation field and, for the first time, an electronic rotation of the spatial encoding field. Furthermore, the role of relaxation effects in MPI is starting to move to the center of interest. Nevertheless, no reconstruction schemes presented thus far have included a dynamical particle model for image reconstruction. A first application of a model that accounts for relaxation effects in the reconstruction of MPI images is presented here, in the form of a simplified but well-performing strategy for signal deconvolution. The results demonstrate the high impact of relaxation deconvolution on the MPI imaging process.

  16. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a prior period provided by the user. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or a choice of the order of the shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency-spectrum and envelope-spectrum analysis, since the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing compound bearing faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
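
    The core period-estimation idea (autocorrelate the envelope instead of requiring a user-supplied prior period) can be sketched as follows; the function name and the minimum-period guard are our illustrative assumptions:

        import numpy as np
        from scipy.signal import hilbert

        def estimate_fault_period(x, fs, min_period=1e-3):
            # x: vibration signal, fs: sampling rate (Hz)
            # returns the lag (in samples) of the strongest envelope-autocorrelation
            # peak beyond a minimum physically plausible period
            env = np.abs(hilbert(x))    # envelope via the analytic signal
            env = env - env.mean()
            ac = np.correlate(env, env, mode='full')[len(env) - 1:]  # one-sided autocorrelation
            start = int(min_period * fs)                             # skip the zero-lag region
            return start + np.argmax(ac[start:])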

  17. Kriging and Semivariogram Deconvolution in the Presence of Irregular Geographical Units

    Science.gov (United States)

    Goovaerts, Pierre

    2008-01-01

    This paper presents a methodology to conduct geostatistical variography and interpolation on areal data measured over geographical units (or blocks) with different sizes and shapes, while accounting for heterogeneous weight or kernel functions within those units. The deconvolution method is iterative and seeks the point-support model that minimizes the difference between the theoretically regularized semivariogram model and the model fitted to the areal data. This model is then used in area-to-point (ATP) kriging to map the spatial distribution of the attribute of interest within each geographical unit. A coherence constraint ensures that the weighted average of kriged estimates equals the areal datum. This approach is illustrated using health data (cancer rates aggregated at the county level) and a population density surface as the kernel function. Simulations are conducted over two regions with contrasting county geographies: the state of Indiana and four states in the Western United States. In both regions, the deconvolution approach yields a point-support semivariogram model that is reasonably close to the semivariogram of simulated point values. The use of this model in ATP kriging yields a more accurate prediction than a naïve point kriging of areal data that simply collapses each county into its geographic centroid. ATP kriging reduces the smoothing effect and is robust with respect to small differences in the point-support semivariogram model. Important features of the point-support semivariogram, such as the nugget effect, can never be fully validated from areal data. The user may want to narrow down the set of solutions based on knowledge of the phenomenon (e.g., set the nugget effect to zero). The approach presented avoids the visual bias associated with the interpretation of choropleth maps and should facilitate the analysis of relationships between variables measured over different spatial supports. PMID:18725997

  18. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    Directory of Open Access Journals (Sweden)

    Erick J Canales-Rodríguez

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.

  19. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization.

    Science.gov (United States)

    Canales-Rodríguez, Erick J; Daducci, Alessandro; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data.

  20. Reconstruction of high-resolution time series from slow-response atmospheric measurements by deconvolution

    Science.gov (United States)

    Ehrlich, André; Wendisch, Manfred

    2017-04-01

    Measurements of high temporal resolution are often needed to study the spatial or temporal variation of atmospheric parameters. An efficient method to enhance the temporal resolution of slow-response measurements is introduced. It is based on the deconvolution theorem of the Fourier transform and restores the amplitude and phase shift of high-frequency fluctuations. It is shown that the quality of the reconstruction depends on the instrument noise, the sensor response time and the frequency of the oscillations. The method is demonstrated by application to measurements of broadband terrestrial irradiance using a pyrgeometer and to temperature and humidity measurements by dropsondes. Using a CGR-4 pyrgeometer with a response time of 3 s, the method is tested on laboratory measurements of synthetic time series including a boxcar function and periodic oscillations. The originally slow-response pyrgeometer data were reconstructed to higher resolution and compared to the predefined synthetic time series. The reconstruction of the time series worked up to oscillations of 0.5 Hz frequency and 2 W m-2 amplitude if the sampling frequency of the data acquisition is 16 kHz or higher. For oscillations faster than 2 Hz, the instrument noise exceeded the reduced amplitude of the oscillations in the measurements and the reconstruction failed. The method was applied to airborne measurements of upward terrestrial irradiance and dropsonde profiles from the VERDI (Vertical Distribution of Ice in Arctic Clouds) field campaign. Pyrgeometer data above open leads in sea ice and a broken cloud field were reconstructed and compared to KT19 infrared thermometer data. The reconstruction of amplitude and phase shift of the deconvolved data improved the agreement with the KT19 data and removed biases in the maximum and minimum values. By application to temperature and humidity profiles measured by dropsondes, the resolution of the cloud-top inversion could be improved.
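
    Assuming the sensor behaves approximately as a first-order low-pass with response time tau (a simplification of the instrument model; the paper restores amplitude and phase via the deconvolution theorem), the reconstruction can be sketched as:

        import numpy as np

        def reconstruct_fast_signal(y, fs, tau, f_cut=0.5):
            # y: slow-response record, fs: sampling rate (Hz), tau: response time (s)
            # frequencies above f_cut (Hz) are left untouched, since there the
            # boosted instrument noise would dominate (cf. the limits reported above)
            Y = np.fft.rfft(y)
            f = np.fft.rfftfreq(len(y), d=1.0 / fs)
            H = 1.0 / (1.0 + 2j * np.pi * f * tau)   # first-order low-pass transfer function
            gain = np.where(f <= f_cut, 1.0 / H, 1.0)
            return np.fft.irfft(Y * gain, n=len(y))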

  1. Reconstructing the genomic content of microbiome taxa through shotgun metagenomic deconvolution.

    Science.gov (United States)

    Carr, Rogan; Shen-Orr, Shai S; Borenstein, Elhanan

    2013-01-01

    Metagenomics has transformed our understanding of the microbial world, allowing researchers to bypass the need to isolate and culture individual taxa and to directly characterize both the taxonomic and gene compositions of environmental samples. However, associating the genes found in a metagenomic sample with the specific taxa of origin remains a critical challenge. Existing binning methods, based on nucleotide composition or alignment to reference genomes, allow only a coarse-grained classification and rely heavily on the availability of sequenced genomes from closely related taxa. Here, we introduce a novel computational framework, integrating variation in gene abundances across multiple samples with taxonomic abundance data to deconvolve metagenomic samples into taxa-specific gene profiles and to reconstruct the genomic content of community members. This assembly-free method is not bounded by the various factors limiting previously described methods of metagenomic binning or metagenomic assembly and represents a fundamentally different approach to metagenome-based genome reconstruction. An implementation of this framework is available at http://elbo.gs.washington.edu/software.html. We first describe the mathematical foundations of our framework and discuss considerations for implementing its various components. We demonstrate the ability of this framework to accurately deconvolve a set of metagenomic samples and to recover the gene content of individual taxa using synthetic metagenomic samples. We specifically characterize determinants of prediction accuracy and examine the impact of annotation errors on the reconstructed genomes. We finally apply metagenomic deconvolution to samples from the Human Microbiome Project, successfully reconstructing genus-level genomic content of various microbial genera, based solely on variation in gene count. These reconstructed genera are shown to correctly capture genus-specific properties. With the accumulation of metagenomic...

  2. A versatile real-time deconvolution DSP system implemented using a time domain inverse filter

    Science.gov (United States)

    Gaydecki, Patrick

    2001-01-01

    A proof-of-principle digital signal processing system is described which can perform deconvolution of audio-bandwidth signals in real time, enabling separation and precise measurement of pulses smeared by a given impulse response. The system operates by convolving a time-domain expression of an inverse filter with the original signal to generate a processed output. It incorporates a high-level user interface for the design of the inverse filter, a communications system and a purpose-designed digital signal processing environment employing a Motorola DSP56002 device. The user interface is extremely versatile, allowing arbitrary inverse filters to be designed and executed within seconds, using a modified frequency-sampling method. Since the inverse filters are realized using a symmetrical finite impulse response, no phase distortion is introduced into the processed signals. A special feature of the design is the manner in which the software and hardware components have been organized as an intelligent system, obviating the need for the user to have a detailed knowledge of filter design theory or any ability in processor architecture and assembly-code programming. At present, the system is capable of deconvolving signals sampled at up to 48 kHz. It is therefore ideally suited to real-time audio enhancement, for example in telephony, public address and long-range broadcast systems, and in compensating for building or room acoustics. Recent advances in DSP technology will enable the same system structure to be applied to signals sampled at frequencies ten times this rate and beyond. This will allow the real-time deconvolution of low-frequency ultrasonic signals used in the inspection and imaging of heterogeneous media.
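
    A minimal sketch of the symmetric-FIR inverse-filter idea using a frequency-sampling design; the magnitude floor eps and the Hamming window are our illustrative choices, not the article's exact modified design:

        import numpy as np

        def inverse_fir(impulse_response, n_taps=255, eps=1e-2):
            # target magnitude 1/|H(f)|, floored by eps to bound noise amplification;
            # a symmetric impulse response gives exactly linear phase
            assert n_taps % 2 == 1, "odd length keeps the filter symmetric"
            H = np.fft.rfft(impulse_response, n=n_taps)
            mag = 1.0 / np.maximum(np.abs(H), eps)  # sampled target magnitude
            h = np.fft.irfft(mag, n=n_taps)         # zero-phase prototype
            h = np.roll(h, n_taps // 2)             # centre it: causal, linear phase
            return h * np.hamming(n_taps)           # mild window to reduce ripple

        # usage: restored = np.convolve(measured, inverse_fir(h_system), mode='same')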

  3. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response for an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization and successfully restores fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
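
    The forward model (measurement = magnetization convolved with the sensor response) is linear, so a basic regularized inversion can be sketched with a Toeplitz convolution matrix. The paper's actual algorithm selects the trade-off by ABIC minimization and also estimates position and length errors, which this illustrative sketch omits:

        import numpy as np
        from scipy.linalg import toeplitz, solve

        def deconvolve_uchannel(measured, response, lam=0.1):
            # measured: SRM output along the core (length N)
            # response: sensor response on the same spacing, peak at response.argmax()
            # lam:      Tikhonov weight (stands in for the ABIC-selected trade-off)
            n = len(measured)
            c = np.zeros(n); r = np.zeros(n)
            k = response.argmax()              # align the response peak with zero lag
            for i, v in enumerate(response):
                off = i - k
                if 0 <= off < n: c[off] = v    # sub-diagonal part
                if 0 <= -off < n: r[-off] = v  # super-diagonal part
            A = toeplitz(c, r)                 # convolution as a Toeplitz matrix
            return solve(A.T @ A + lam**2 * np.eye(n), A.T @ measured)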

  4. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

    The deconvolution effect of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems is investigated by using the finite element method. Two different earthquake input mechanisms are used to consider the deconvolution effects in the analyses: the standard rigid-base input model and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example. As near-fault ground motion, the 1999 Kocaeli earthquake ground motion is selected. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  5. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    Science.gov (United States)

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and monitor dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.

  6. INFORMATION FUSION STEADY-STATE WHITE NOISE DECONVOLUTION ESTIMATORS WITH TIME-DELAYED MEASUREMENTS AND COLORED MEASUREMENT NOISES

    Institute of Scientific and Technical Information of China (English)

    Sun Xiaojun; Deng Zili

    2009-01-01

    The white noise deconvolution or input white noise estimation problem has important applications in oil seismic exploration, communication and signal processing. Using the modern time series analysis method, based on the Auto-Regressive Moving Average (ARMA) innovation model and the linear minimum variance optimal fusion rules, three optimal weighted-fusion white noise deconvolution estimators are presented for multisensor systems with time-delayed measurements and colored measurement noises. They can handle the input white noise fused filtering, prediction and smoothing problems. The accuracy of the fusers is higher than that of each local white noise estimator. In order to compute the optimal weights, a formula for computing the local estimation error cross-covariances is given. A Monte Carlo simulation example for a system with 3 sensors and Bernoulli-Gaussian input white noise shows their effectiveness and performance.

  7. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

    In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal-versus-dose curve has been compared with the one measured in a 60Co beam. Different experiments have been performed in order to observe the effects of thermal stress and radiation damage on the detector sensitivity. A LET dependence of the TL response has been observed. In order to obtain a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are reported extensively, and the possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  8. Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and total variation spatial regularization

    CERN Document Server

    Canales-Rodríguez, Erick J; Sotiropoulos, Stamatios N; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Mendizabal, Yosu Yurramendi; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2014-01-01

    Due to a higher capability in resolving white matter fiber crossings, Spherical Deconvolution (SD) methods have become very popular in brain fiber-tracking applications. However, while some of these estimation algorithms assume a central Gaussian distribution for the MRI noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique intended to deal with realistic MRI noise. The algorithm relies on a maximum a posteriori formulation based on Rician and noncentral Chi likelihood models and includes a total variation (TV) spatial regularization term. By means of a synthetic phantom contaminated with noise mimicking patterns generated by data processing in mu...

  9. An efficient de-convolution reconstruction method for spatiotemporal-encoding single-scan 2D MRI.

    Science.gov (United States)

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Li, Jing; Chen, Ying; Bao, Lijun; Chen, Zhong

    2013-03-01

    The spatiotemporal-encoding single-scan MRI method is relatively insensitive to field inhomogeneity compared to the EPI method. The conjugate gradient (CG) method has been used to reconstruct super-resolved images from the original blurred ones based on a coarse magnitude calculation. In this article, a new de-convolution reconstruction method is proposed. By removing the quadratic phase modulation from the signal acquired with spatiotemporal-encoding MRI, the signal can be described as a convolution of the desired super-resolved image and a point spread function. The de-convolution method proposed herein not only is simpler than the CG method, but also provides super-resolved images with better quality. This new reconstruction method may make the spatiotemporal-encoding 2D MRI technique more valuable for clinical applications.

  10. Modelling the semivariograms and cross-semivariograms required in downscaling cokriging by numerical convolution deconvolution

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Atkinson, Peter M.

    2007-10-01

    ...close to the origin presented in regularized semivariograms and cross-semivariograms. The solution proposed is to find, by numerical deconvolution, a positive-definite set of point covariances and cross-covariances; any required model may then be obtained by numerical convolution of the corresponding point model. The first step implies several numerical deconvolutions in which some model parameters are fixed, while others are estimated using the available experimental semivariograms and cross-semivariograms and some goodness-of-fit measure. The details of the proposed procedure are presented and illustrated with an example from remote sensing.

  11. Partial volume correction of brain PET studies using iterative deconvolution in combination with HYPR denoising.

    Science.gov (United States)

    Golla, Sandeep S V; Lubberink, Mark; van Berckel, Bart N M; Lammertsma, Adriaan A; Boellaard, Ronald

    2017-12-01

    Accurate quantification of PET studies depends on the spatial resolution of the PET data. The commonly limited PET resolution results in partial volume effects (PVE). Iterative deconvolution methods (IDM) have been proposed as a means to correct for PVE. IDM improves the spatial resolution of PET studies without the need for structural information (e.g. MR scans). On the other hand, deconvolution also increases noise, which results in lower signal-to-noise ratios (SNR). The aim of this study was to implement IDM in combination with HighlY constrained back-PRojection (HYPR) denoising to mitigate the poor SNR properties of conventional IDM. An anthropomorphic Hoffman brain phantom was filled with an [(18)F]FDG solution of ~25 kBq mL(-1) and scanned for 30 min on a Philips Ingenuity TF PET/CT scanner (Philips, Cleveland, USA) using a dynamic brain protocol with various frame durations ranging from 10 to 300 s. Van Cittert IDM was used for partial volume correction (PVC) of the scans. In addition, HYPR was used to improve the SNR of the dynamic PET images, applying it before and/or after IDM. The Hoffman phantom dataset was used to optimise the IDM parameters (number of iterations, type of algorithm, with/without HYPR) and the order of HYPR implementation, based on the best average agreement between measured and actual activity concentrations in the regions. Next, dynamic [(11)C]flumazenil (five healthy subjects) and [(11)C]PIB (four healthy subjects and four patients with Alzheimer's disease) scans were used to assess the impact of IDM with and without HYPR on plasma input-derived distribution volumes (V T) across various regions of the brain. In the case of the [(11)C]flumazenil scans, Hypr-IDM-Hypr showed an increase of 5 to 20% in the regional V T, whereas a 0 to 10% increase or decrease was seen in the case of [(11)C]PIB, depending on the volume of interest or type of subject (healthy or patient). References for these comparisons were the V Ts from the PVE-uncorrected scans. IDM improved quantitative accuracy...
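
    For reference, the Van Cittert iteration named above is a simple residual-feedback fixed point, f_{k+1} = f_k + alpha * (g - psf * f_k). A minimal sketch follows; the growth of noise with iteration count is exactly why the study pairs the deconvolution with HYPR denoising:

        import numpy as np
        from scipy.ndimage import convolve

        def van_cittert(image, psf, n_iter=10, alpha=1.0):
            # image: measured PET frame; psf: scanner point spread function
            f = image.astype(float).copy()
            for _ in range(n_iter):
                # feed the residual between measurement and re-blurred estimate back in
                f += alpha * (image - convolve(f, psf, mode='nearest'))
            return f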

  12. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.

  13. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    Science.gov (United States)

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-01

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å(2), 295.1 Å(2), 296.8 Å(2), and 300.1 Å(2); all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  14. Alternative applications of the method of moments: from electromagnetic waves to source synthesis, deconvolution, and data processing in navigation systems

    Science.gov (United States)

    Tamas, Razvan; Dumitrascu, Ana; Caruntu, George

    2015-02-01

    The method of moments is mostly used in electromagnetism to solve linear operator equations. In this paper, we present three different, alternative applications of this numerical technique: resolution of the integral equation of convolution (i.e., deconvolution), synthesis of electromagnetic radiators optimized to yield a given time-domain or frequency-domain response, and processing of data provided by an inertial navigation system, with the aim of decomposing a complex displacement into elementary movements.

  15. Application of deconvolution techniques in X-ray spectra; Aplicacion de tecnica de deconvolucion en espectros de rayos X

    Energy Technology Data Exchange (ETDEWEB)

    Burgos Garcia, D.; Sancho Llerandi, C.; Saez Vergara, J. C.; Correa Garces, E.; Lanzas Sanchez, M. R.; Herbella Blazquez, M.

    2011-07-01

    The decay of Am-241, like that of Pu-239, is accompanied by characteristic X-ray emission, in this case from Np-237. Since U (Z = 92) and Np (Z = 93) are elements of consecutive atomic number, their X-ray emissions have very similar energies, and their photopeaks therefore inevitably overlap in the spectrum. This raises the question of whether it is appropriate to try to separate their respective contributions in the spectrum using spectral deconvolution techniques.

  16. Super-Resolution and De-convolution for Single/Multi Gray Scale Images Using SIFT Algorithm

    OpenAIRE

    Ritu Soni; Siddharth Singh Chouhan

    2014-01-01

    This paper presents a blind algorithm that restores blurred images, addressing single-image and multi-image blur de-convolution as well as multi-image super-resolution for low-resolution images degraded by additive white Gaussian noise, aliasing and linear space-invariant blur. Image de-blurring is a field of image processing in which an original, sharp image is recovered from a corrupted image. The proposed method is based on an alternating minimization algorithm with respect to unidentifie...

  17. Liquid chromatography with diode array detection combined with spectral deconvolution for the analysis of some diterpene esters in Arabica coffee brew.

    Science.gov (United States)

    Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda

    2015-02-01

    In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm, allowing their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing quantification of the eight targeted compounds. Because the kahweol esters could be quantified either from the chromatogram obtained at 290 nm or from the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector.
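
    Generic spectral deconvolution of co-eluting DAD peaks can be viewed as a bilinear least-squares problem, D ≈ C S, with D the time-by-wavelength absorbance matrix, S the reference spectra and C the deconvoluted chromatograms. The sketch below assumes the reference spectra are known in advance, which may differ from the authors' exact procedure:

        import numpy as np

        def deconvolute_dad(data, ref_spectra):
            # data:        (n_times, n_wavelengths) absorbance matrix D
            # ref_spectra: (n_compounds, n_wavelengths) reference spectra S
            # returns C, (n_times, n_compounds): one deconvoluted chromatogram
            # per compound, from the bilinear model D ~= C @ S
            C, *_ = np.linalg.lstsq(ref_spectra.T, data.T, rcond=None)
            return C.T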

  18. Reduced Time CT Perfusion Acquisitions Are Sufficient to Measure the Permeability Surface Area Product with a Deconvolution Method

    Directory of Open Access Journals (Sweden)

    Francesco Giuseppe Mazzei

    2014-01-01

    Objective. To reduce the radiation dose, reduced-time CT perfusion (CTp) acquisitions are tested for measuring the permeability surface (PS) product with a deconvolution method. Methods and Materials. PS was calculated with repeated measurements (n=305) while truncating the time density curve (TDC) at different time values in 14 CTp studies, using CTp 4D software (GE Healthcare, Milwaukee, WI, US). The median acquisition time of the CTp studies was 59.35 seconds (range 49-92 seconds). To verify the accuracy of the deconvolution algorithm, we searched for a variation of the truncated PS within the measurement errors, that is, within 3 standard deviations of the mean nominal error provided by the software. The test was also performed for all the remaining CTp parameters measured. Results. The maximum variability of PS occurred within the first 25 seconds. The PS became constant after 40 seconds for the majority of the active tumors (10/11), while for necrotic tissues it was consistent within 1% after 50 seconds. Consistent results held for all the observed CTp parameters, as expected from their analytical dependence. Conclusion. A 40-second acquisition time could be an optimal compromise between an accurate measurement of the PS and a reasonable dose exposure with a deconvolution method.

  19. The software package AIRY 7.0: new efficient deconvolution methods for post-adaptive optics data

    Science.gov (United States)

    La Camera, Andrea; Carbillet, Marcel; Prato, Marco; Boccacci, Patrizia; Bertero, Mario

    2016-07-01

    The Software Package AIRY (an acronym for Astronomical Image Restoration in interferometrY) is a complete tool for the simulation and deconvolution of astronomical images. The data can be a post-adaptive-optics image from a single-dish telescope or a set of multiple images from a Fizeau interferometer. Written in IDL and freely downloadable, AIRY is a package of the CAOS Problem-Solving Environment. It is made of different modules, each one performing a specific task, e.g. simulation, deconvolution, and analysis of the data. In this paper we present the latest version of AIRY, containing a new optimized method for the deconvolution problem based on the scaled-gradient projection (SGP) algorithm, extended with different regularization functions. Moreover, a new module based on our multi-component method has been added to AIRY. Finally, we provide a few example projects describing our multi-step method recently developed for the deblurring of high-dynamic-range images. With AIRY v.7.0, users have a powerful tool for simulating observations and for reconstructing their real data.

  20. Super-resolution non-parametric deconvolution in modelling the radial response function of a parallel plate ionization chamber.

    Science.gov (United States)

    Kulmala, A; Tenhunen, M

    2012-11-07

    The signal of a dosimetric detector generally depends on the shape and size of the sensitive volume of the detector. In order to optimize the performance of the detector and the reliability of the output signal, the effect of the detector size should be corrected or, at least, taken into account. The response of the detector can be modelled using the convolution theorem, which connects the system input (actual dose), the output (measured result) and the effect of the detector (response function) by a linear convolution operator. We have developed a super-resolution, non-parametric deconvolution method for determining the radial response function of a cylindrically symmetric ionization chamber. We demonstrate that the presented deconvolution method is able to determine the radial response of the Roos parallel plate ionization chamber with better than 0.5 mm correspondence with the physical dimensions of the chamber. In addition, the performance of the method was proved by the excellent agreement between the output factors of the stereotactic conical collimators (4-20 mm diameter) measured by the Roos chamber, whose sensitive volume is larger than the measured field, and those measured by the reference detector (a diode). The presented deconvolution method has potential for providing reference data for more accurate physical models of the ionization chamber, as well as for improving and enhancing the performance of detectors in specific dosimetric problems.

  1. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional-analytic theory based on results for the linear quadratic control of infinite-dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite-dimensional dynamical systems. A finite-dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  2. Spectral deconvolution and operational use of stripping ratios in airborne radiometrics.

    Science.gov (United States)

    Allyson, J D; Sanderson, D C

    2001-01-01

    Spectral deconvolution using stripping ratios for a set of pre-defined energy windows is the simplest means of reducing the most important part of gamma-ray spectral information. In this way, the effective interferences between the measured peaks are removed, leading, through a calibration, to clear estimates of radionuclide inventory. While laboratory measurements of stripping ratios are relatively easy to acquire, with detectors placed above small-scale calibration pads of known radionuclide concentrations, the extrapolation to the altitudes at which airborne survey detectors are used brings difficulties such as air-path attenuation and greater uncertainties in knowing ground-level inventories. Stripping ratios are altitude dependent, and laboratory measurements using various absorbers to simulate the air path have been used with some success. Full-scale measurements from an aircraft require a suitable location where radionuclide concentrations vary little over the field of view of the detector (which may be hundreds of metres). Monte Carlo simulations offer the potential of full-scale reproduction of gamma-ray transport and detection mechanisms. Investigations have been made to evaluate stripping ratios using experimental and Monte Carlo methods.
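
    Window stripping itself is a small linear solve: given a matrix of stripping ratios relating nuclide count rates to window count rates, the corrected per-nuclide rates follow by inversion. A sketch with illustrative window labels:

        import numpy as np

        def strip_windows(counts, stripping):
            # counts:    gross count rates in the pre-defined energy windows
            #            (e.g. the K, U and Th windows of airborne surveys)
            # stripping: square matrix whose (i, j) entry is the count rate seen
            #            in window i per unit count rate of nuclide j (diagonal 1)
            # returns the stripped, per-nuclide count rates
            return np.linalg.solve(np.asarray(stripping, float), np.asarray(counts, float))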

  3. Measurement and analysis of postsynaptic potentials using a novel voltage-deconvolution method.

    Science.gov (United States)

    Richardson, Magnus J E; Silberberg, Gilad

    2008-02-01

    Accurate measurement of postsynaptic potential amplitudes is a central requirement for the quantification of synaptic strength, the dynamics of short-term and long-term plasticity, and vesicle-release statistics. However, the intracellular voltage is a filtered version of the underlying synaptic signal, and so a method of accounting for the distortion caused by overlapping postsynaptic potentials must be used. Here a voltage-deconvolution technique is demonstrated that defilters the entire voltage trace to reveal an underlying signal of well-separated synaptic events. These isolated events can be cropped out and reconvolved to yield a set of isolated postsynaptic potentials from which voltage amplitudes may be measured directly, greatly simplifying this common task. The method also has the significant advantage of providing a higher temporal resolution of the dynamics of the underlying synaptic signal. The versatility of the method is demonstrated by a variety of experimental examples, including excitatory and inhibitory connections to neurons with passive membranes and those with activated voltage-gated currents. The deconvolved current-clamp voltage has many features in common with voltage-clamp current measurements. These similarities are analyzed using cable theory and a multicompartment cell reconstruction, as well as by direct comparison to voltage-clamp experiments.
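
    For a passive single-compartment membrane the defiltering step reduces to d(t) = tau * dV/dt + V, which undoes the membrane's RC low-pass and collapses each slow PSP into a brief, well-separated event. A minimal sketch (tau estimated beforehand, e.g. from a current step; dendritic filtering and active currents are ignored):

        import numpy as np

        def deconvolve_voltage(v, dt, tau):
            # v: membrane potential trace, dt: sample interval (s),
            # tau: membrane time constant (s)
            dv = np.gradient(v, dt)   # centred derivative
            return tau * dv + v

        # reconvolving a cropped event e with the membrane kernel recovers the PSP:
        # t = np.arange(0, 5 * tau, dt)
        # psp = np.convolve(e, np.exp(-t / tau) * dt / tau)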

  4. Deconvoluting chain heterogeneities from driven translocation through a nano-pore

    CERN Document Server

    Adhikari, Ramesh

    2014-01-01

    We study the translocation dynamics of a driven compressible semi-flexible chain consisting of alternate blocks of stiff ($S$) and flexible ($F$) segments of size $m$ and $n$ respectively, for different chain lengths $N$. The free parameters in the model are the bending rigidity $\kappa_b$, which controls the three-body interaction term, the elastic constant $k_F$ in the FENE (bond) potential between successive monomers, as well as the block lengths $m$ and $n$ and the number of repeat units $p$ ($N=(m+n)p$). We demonstrate that, due to the change in the entropic barrier and the inhomogeneous friction on the chain, a variety of scenarios are possible, amply manifested in the incremental mean first passage time (IMFPT) or in the waiting time distribution of the translocating chain. This information can be deconvolved to extract the mechanical properties of the chain at various length scales and can thus be used in nanopore-based methods to probe biomolecules, such as DNA, RNA and proteins.

  5. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    Science.gov (United States)

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.

  6. Gene Expression Deconvolution for Uncovering Molecular Signatures in Response to Therapy in Juvenile Idiopathic Arthritis.

    Directory of Open Access Journals (Sweden)

    Ang Cui

    Gene expression-based signatures help identify pathways relevant to diseases and treatments, but are challenging to construct when there is a diversity of disease mechanisms and treatments in patients with complex diseases. To overcome this challenge, we present a new application of an in silico gene expression deconvolution method, ISOpure-S1, and apply it to identify a common gene expression signature corresponding to response to treatment in 33 juvenile idiopathic arthritis (JIA) patients. Using pre- and post-treatment gene expression profiles only, we found a gene expression signature that significantly correlated with a reduction in the number of joints with active arthritis, a measure of clinical outcome (Spearman rho = 0.44, p = 0.040, Bonferroni correction). This signature may be associated with a decrease in T-cells, monocytes, neutrophils and platelets. The products of most differentially expressed genes include known biomarkers for JIA, such as major histocompatibility complexes and interleukins, as well as novel biomarkers including α-defensins. This method is readily applicable to expression datasets of other complex diseases to uncover shared mechanistic patterns in heterogeneous samples.

  7. Evaluation of trapping parameter of quartz by deconvolution of the glow curves

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, R.K. [Department of Physics, Manipur University, Imphal 795001 (India); Singh, L. Lovedy, E-mail: lovedyo1@yahoo.co.in [Department of Physics, Manipur University, Imphal 795001 (India)

    2011-08-15

    The glow curves of natural quartz excited with different doses of β-irradiation have been subjected to Computerized Glow Curve Deconvolution (CGCD) in the kinetic formalism. The location of the constituent peaks, of which there are as many as eleven in the temperature region of 27-575 °C, has been ascertained by resorting to the second-order derivative plot of the glow curve. Not only the figure of merit (FOM) but also a χ²-test has been taken as a criterion for the acceptance of the goodness of fit. CGCD analysis reveals that the frequency factor of quartz is in the range of 1.50 ± 0.26 × 10¹¹ s⁻¹. The analysis leads to the conclusion that the trapping levels of quartz can be approximated by the Urbach relation E = 27kT_m, where T_m is the temperature at maximum intensity. - Highlights: > Glow curves of natural and beta-irradiated quartz in the temperature range from room temperature to 573 °C are analysed. > The frequency factor of quartz is in the range of 1.50 ± 0.26 × 10¹¹ s⁻¹. > The trapping levels of quartz can be approximated by the Urbach relation E = 27kT_m.
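
    The quoted Urbach relation makes the trap-depth estimate a one-line computation; a minimal sketch:

        K_BOLTZ_EV = 8.617e-5  # Boltzmann constant in eV/K

        def trap_depth_urbach(t_max_celsius):
            # Urbach approximation E = 27 k T_m, with T_m the temperature
            # of maximum glow intensity converted to kelvin
            return 27.0 * K_BOLTZ_EV * (t_max_celsius + 273.15)

        # e.g. a peak at 110 deg C gives roughly 27 * 8.617e-5 * 383 ~ 0.89 eV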

  8. Scatter correction in CBCT with an offset detector through a deconvolution method using data consistency

    Science.gov (United States)

    Kim, Changhwan; Park, Miran; Lee, Hoyeon; Cho, Seungryong

    2016-03-01

    Our earlier work has demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods for full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of transverse data truncation in the projections. In this study, we propose a modified scheme of the scatter kernel optimization method that can be used in half-fan mode cone-beam CT, and we have successfully shown its feasibility. Using the volume image first reconstructed from the half-fan projection data, we acquired full-fan projection data by forward-projection synthesis. The synthesized full-fan projections were used in part to fill the truncated regions in the half-fan data. By doing so, we were able to utilize the existing data-consistency-driven scatter kernel optimization method. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by an experimental study using the ACS head phantom.

  9. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets.

    Science.gov (United States)

    Wüstner, Daniel; Faergeman, Nils J

    2008-08-01

    Intrinsically fluorescent sterols, like dehydroergosterol (DHE), mimic cholesterol closely and are therefore suitable for determining cholesterol transport by fluorescence microscopy. Disadvantages of DHE are its low quantum yield, rapid bleaching, and the fact that its excitation and emission are in the UV region of the spectrum. Thus, one has to deal with chromatic aberration and a low signal-to-noise ratio. We developed a method to correct for chromatic aberration between the UV channel and the red/green channel in multicolor imaging of DHE compared with the lipid droplet marker Nile Red in living macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes, but not in foam cells, was the fluorescent sterol confined to the droplet-limiting membrane. We developed an approach to visualize and quantify the sterol content of lipid droplets in living cells, with potential for automated high-content screening of cellular sterol transport.

  10. Convergence and Optimality of Adaptive Regularization for Ill-posed Deconvolution Problems in Infinite Spaces

    Institute of Scientific and Technical Information of China (English)

    Yan-fei Wang; Qing-hua Ma

    2006-01-01

    The adaptive regularization method was first proposed by Ryzhikov et al. in [6] for deconvolution in the elimination of multiples, which appear frequently in geoscience and remote sensing. Their experiments show that the method is very effective. It improves on Tikhonov regularization in the sense that it is adaptive, i.e., it automatically eliminates the small eigenvalues of the operator when the operator is nearly singular. In this paper, we give a theoretical analysis of adaptive regularization. We introduce an a priori strategy and an a posteriori strategy for choosing the regularization parameter, and prove regularity of the adaptive regularization for both strategies. For the former, we show that the order of the convergence rate can approach O(‖n‖^{4v/(4v+1)}) for some 0 < v < 1, while for the latter, the order of the convergence rate is at most O(‖n‖^{2v/(2v+1)}) for some 0 < v < 1.

  11. Z-scan fluorescence profile deconvolution of cytosolic and membrane-associated protein populations.

    Science.gov (United States)

    Smith, Elizabeth M; Hennen, Jared; Chen, Yan; Mueller, Joachim D

    2015-07-01

    This study introduces a technique that characterizes the spatial distribution of peripheral membrane proteins that associate reversibly with the plasma membrane. An axial scan through the cell generates a z-scan intensity profile of a fluorescently labeled peripheral membrane protein. This profile is analytically separated into membrane and cytoplasmic components by accounting for both the cell geometry and the point spread function. We experimentally validated the technique and characterized both the resolvability and stability of z-scan measurements. Furthermore, using the cellular brightness of green fluorescent protein, we were able to convert the fluorescence intensities into concentrations at the membrane and in the cytoplasm. We applied the technique to study the translocation of the pleckstrin homology domain of phospholipase C delta 1, labeled with green fluorescent protein, upon ionomycin treatment. Analysis of the z-scan fluorescence profiles revealed protein-specific cell height changes and allowed for comparison between the observed fluorescence changes and predictions based on the cellular surface area-to-volume ratio. The quantitative capability of z-scan fluorescence profile deconvolution offers opportunities for investigating peripheral membrane proteins in the living cell that were previously not accessible. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Early Fault Diagnosis of Bearings Using an Improved Spectral Kurtosis by Maximum Correlated Kurtosis Deconvolution

    Science.gov (United States)

    Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing

    2015-01-01

    The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
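    The SK step of such a scheme can be reproduced in a few lines: compute a short-time Fourier transform and, for each frequency bin, the kurtosis of the spectral magnitude across time; the band where SK peaks is the candidate resonance band excited by the fault impacts. The sketch below is a generic illustration of that step only (the MCKD pre-filtering stage described in the abstract is omitted), with all signal parameters invented for the demo.

```python
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    """Spectral kurtosis per frequency bin: E|X|^4 / (E|X|^2)^2 - 2 (~0 for Gaussian noise)."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    m2 = np.mean(np.abs(Z) ** 2, axis=1)
    m4 = np.mean(np.abs(Z) ** 4, axis=1)
    return f, m4 / m2**2 - 2.0

# Demo: periodic impacts exciting a 3 kHz resonance, buried in heavy noise.
fs, duration = 20_000, 2.0
t = np.arange(int(fs * duration)) / fs
impacts = (np.arange(t.size) % int(fs / 7) == 0).astype(float)      # 7 Hz fault rate
ringing = np.exp(-t[:200] / 2e-3) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impacts, ringing, mode="same") + 0.5 * np.random.randn(t.size)

f, sk = spectral_kurtosis(x, fs)
print(f"SK peaks near {f[np.argmax(sk)]:.0f} Hz (expected ~3000 Hz)")
```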

  13. Iterative Blind Deconvolution Algorithm for Deblurring PSP Image of Rotating Surfaces

    Science.gov (United States)

    Pandey, Anshuman; Gregory, James

    2015-11-01

    Fast Pressure-Sensitive Paint (PSP) is used in this work to measure unsteady surface pressures on rotating bodies, with iterative image deblurring schemes being developed to correct for image blur at high rotation rates. A significant amount of rotational blur can occur in PSP images acquired in the lifetime mode when the time scale of luminescent decay is long relative to the rotational speed. Image deblurring schemes have been developed to address this problem, but are not currently able to handle strong pressure gradients. Since the local point spread function at each point on the rotor depends on the unknown pressure, restoring such an image is a spatially-varying blind deconvolution problem. An iterative scheme based on the lifetime decay characteristics of PSP has been developed for restoring this image. The scheme estimates the spatially-varying blur kernel without filtering the blurred image and then restores it using classical iterative regularization tools. The resulting scheme is evaluated using computationally-generated pressure fields with strong gradients, as well as experimental data with strong gradients in luminescent lifetime due to a nitrogen jet. Factors such as convergence, image noise, and regularization-iteration count are studied in this work. Funded by the U.S. Government under Agreement No. W911W6-11-2-0010 through the Georgia Tech Vertical Lift Research Center of Excellence.
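    The "classical iterative regularization tools" step can be illustrated with the Richardson-Lucy update, a standard choice for non-blind deblurring once a kernel estimate is in hand. The sketch below shows that update for a single, shift-invariant kernel; the spatially varying, pressure-dependent kernels of the actual PSP problem, and the kernel-estimation stage itself, are beyond this illustration, and the test image and PSF are made up.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution with a known, shift-invariant PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)          # data / model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage with an invented Gaussian blur kernel.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:30, 30:40] = 1.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
observed = fftconvolve(truth, psf / psf.sum(), mode="same") + 0.01 * rng.standard_normal((64, 64))
restored = richardson_lucy(np.clip(observed, 0, None), psf)
```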

  14. Features of Blastocystis spp. in xenic culture revealed by deconvolutional microscopy.

    Science.gov (United States)

    Nagel, Robyn; Gray, Christian; Bielefeldt-Ohmann, Helle; Traub, Rebecca J

    2015-09-01

    Blastocystis spp. are common human enteric parasites with complex morphology and have been reported to cause irritable bowel syndrome (IBS). Deconvolutional microscopy with time-lapse imaging and fluorescent spectroscopy of xenic cultures of Blastocystis spp. from stool samples of IBS patients and from asymptomatic, healthy pigs allowed observations of living organisms in their natural microbial environment. Blastocystis organisms of the vacuolated, granular, amoebic and cystic forms were observed to autofluorescence in the 557/576 emission spectra. Autofluorescence could be distinguished from fluorescein-conjugated Blastocystis-specific antibody labelling in vacuolated and granular forms. This antibody labelled Blastocystis subtypes 1, 3 and 4 but not 5. Surface pores of 1 μm in diameter were observed cyclically opening and closing over 24 h. Vacuolated forms extruded a viscous material from a single surface point with coincident deflation that may demonstrate osmoregulation. Tear-shaped granules were observed exiting from the surface of an amoebic form, but their origin and identity remain unknown.

  15. The use of deconvolution techniques to identify the fundamental mixing characteristics of urban drainage structures.

    Science.gov (United States)

    Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G

    2010-01-01

    Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.
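    A deconvolution of the kind the authors describe can be sketched generically: treat the downstream concentration as the convolution of the upstream profile with the structure's residence time distribution (RTD), estimate the RTD by regularized Fourier division, and integrate it to obtain the CRTD. This is an illustration under a linear, time-invariant assumption, not the authors' specific identification procedure; all numbers are synthetic.

```python
import numpy as np

def estimate_crtd(upstream, downstream, dt, lam=1e-3):
    """Regularized FFT deconvolution: downstream = RTD * upstream; CRTD = cumulative RTD."""
    n = len(upstream)
    U, Y = np.fft.rfft(upstream), np.fft.rfft(downstream)
    H = np.conj(U) * Y / (np.abs(U) ** 2 + lam * np.max(np.abs(U)) ** 2)  # Tikhonov-damped division
    rtd = np.clip(np.fft.irfft(H, n), 0.0, None)
    crtd = np.cumsum(rtd)
    return crtd / crtd[-1], np.arange(n) * dt

# Synthetic demo: a smeared (non-instantaneous) injection passing through a mixing structure.
dt, n = 0.1, 600
t = np.arange(n) * dt
upstream = np.exp(-0.5 * ((t - 5) / 0.8) ** 2)
true_rtd = np.where(t > 2, np.exp(-(t - 2) / 4), 0.0); true_rtd /= true_rtd.sum()
downstream = np.convolve(upstream, true_rtd)[:n] + 0.002 * np.random.randn(n)
crtd, tt = estimate_crtd(upstream, downstream, dt)
```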

  16. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Science.gov (United States)

    Födisch, P.; Wohsmann, J.; Lange, B.; Schönherr, J.; Enghardt, W.; Kaever, P.

    2016-09-01

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. The output signal is shaped by a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem, or an increased rate of pulse pile-ups. Moreover, spectroscopy applications require a correction of the pulse-height, while a shortened pulse-width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finally, the exponential decay is shaped to a step-like output signal that is exploited by a forward-looking pulse processing.
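    The deconvolution described here has a compact digital form: an exponential tail with per-sample ratio d = exp(-1/(fs·τ)) is undone by the filter H(z) = (1 - d·z⁻¹)/(1 - z⁻¹), i.e., a zero cancelling the decay pole followed by an accumulator, which turns each pulse into a step whose height carries the pulse amplitude. The sketch below illustrates that general idea with invented pulse parameters; it is not the paper's specific filter design.

```python
import numpy as np
from scipy.signal import lfilter

fs, tau = 100e6, 50e-6            # illustrative sampling rate and CSA decay constant
d = np.exp(-1.0 / (fs * tau))     # per-sample decay ratio of the exponential tail

# Synthetic charge-sensitive-amplifier output: two piled-up exponential pulses.
n = 20_000
x = np.zeros(n)
for start, amp in ((2000, 1.0), (6000, 0.7)):
    k = np.arange(n - start)
    x[start:] += amp * d**k

# Deconvolution filter H(z) = (1 - d z^-1)/(1 - z^-1): the zero cancels the decay pole,
# the integrator yields a step-like output whose increments give the pulse amplitudes.
y = lfilter([1.0, -d], [1.0, -1.0], x)
print(y[4000], y[9000])   # ~1.0 and ~1.7: cumulative step heights
```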

  17. Restoration of solar and star images with phase diversity-based blind deconvolution

    Institute of Scientific and Technical Information of China (English)

    Qiang Li; Sheng Liao; Honggang Wei; Mangzuo Shen

    2007-01-01

    The images recorded by a ground-based telescope are often degraded by atmospheric turbulence and the aberration of the optical system. Phase diversity-based blind deconvolution is an effective post-processing method that can be used to overcome the turbulence-induced degradation. The method uses an ensemble of short-exposure images obtained simultaneously from multiple cameras to jointly estimate the object and the wavefront distribution on the pupil. Based on signal estimation theory and optimization theory, we derive the cost function and solve the large-scale optimization problem using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. We apply the method to turbulence-degraded images generated with a computer, solar images acquired with the Swedish Vacuum Solar Telescope (SVST, 0.475 m) in La Palma, and star images collected with the 1.2-m telescope at Yunnan Observatory. In order to avoid edge effects in the restoration of the solar images, a modified Hanning apodized window is adopted. The star image can still be restored when the defocus distance is measured inaccurately. The restored results demonstrate that the method is efficient for removing the effect of turbulence and reconstructing point-like or extended objects.

  18. A distance-driven deconvolution method for CT image-resolution improvement

    Science.gov (United States)

    Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon

    2016-12-01

    The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric optics model, which provides an approximate blurring PSF (point spread function) kernel that varies with the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different deconvolution kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve first increases; after that, the overshoot begins to decrease while still showing a larger MTF than normal FBP (filtered backprojection). The case of five subbands appears to offer a balanced trade-off between MTF boost and overshoot minimization. As the number of subbands increases, the noise (standard deviation) tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal spot wobbling. The proposed algorithm shows promising results in improving spatial resolution while avoiding excessive noise boost.
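    The band-wise scheme can be illustrated generically: partition the image by distance from the source, deconvolve the whole image once per band with that band's PSF, and composite the results using the band masks. The sketch below does this with Gaussian PSFs whose width grows with distance and a simple Wiener filter; the paper's actual distance-to-PSF mapping and kernel design are not reproduced, and every parameter is illustrative.

```python
import numpy as np

def wiener_deconv(img, sigma, lam=1e-2):
    """Wiener deconvolution with a Gaussian PSF of std `sigma` pixels, built in Fourier space."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))   # OTF of a Gaussian blur
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * G / (H**2 + lam)))

def banded_deconv(img, source_yx, n_bands=5, sigmas=(0.8, 1.2, 1.6, 2.0, 2.4)):
    """Deconvolve each distance band from the source with its own PSF, then composite."""
    yy, xx = np.indices(img.shape)
    dist = np.hypot(yy - source_yx[0], xx - source_yx[1])
    edges = np.linspace(dist.min(), dist.max(), n_bands + 1)
    band = np.clip(np.digitize(dist, edges) - 1, 0, n_bands - 1)
    out = np.zeros_like(img)
    for b in range(n_bands):
        out[band == b] = wiener_deconv(img, sigmas[b])[band == b]
    return out
```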

  19. Blind deconvolution with principal components analysis for wide-field and small-aperture telescopes

    Science.gov (United States)

    Jia, Peng; Sun, Rongyu; Wang, Weinan; Cai, Dongmei; Liu, Huigen

    2017-09-01

    Telescopes with a wide field of view (greater than 1°) and small apertures (less than 2 m) are workhorses for observations such as sky surveys and fast-moving object detection, and play an important role in time-domain astronomy. However, images captured by these telescopes are contaminated by optical system aberrations, atmospheric turbulence, tracking errors and wind shear. To increase the quality of images and maximize their scientific output, we propose a new blind deconvolution algorithm based on statistical properties of the point spread functions (PSFs) of these telescopes. In this new algorithm, we first construct the PSF feature space through principal component analysis, and then classify PSFs from different positions and times using a self-organizing map. According to the classification results, we group images of the same PSF type and select their PSFs to construct a prior PSF. The prior PSF is then used to restore these images. To investigate the improvement that this algorithm provides for data reduction, we process images of space debris captured by our small-aperture wide-field telescopes. Compared with the reduction results of the original images and of images processed with the standard Richardson-Lucy method, our method shows a promising improvement in astrometric accuracy.
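    The classification stage can be sketched with standard tools: flatten the PSF stack, reduce it with PCA, cluster the coefficients, and average the PSFs in each cluster to form a prior. The sketch below uses k-means as a simple stand-in for the self-organizing map the authors use; the PSF stack is synthetic and every parameter is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def prior_psfs(psf_stack, n_components=8, n_classes=3):
    """psf_stack: (N, h, w) array of PSF estimates from different fields/times."""
    n, h, w = psf_stack.shape
    flat = psf_stack.reshape(n, h * w)
    coeffs = PCA(n_components=n_components).fit_transform(flat)   # PSF feature space
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(coeffs)
    # One prior PSF per class: the normalized mean of its members.
    priors = []
    for c in range(n_classes):
        p = flat[labels == c].mean(axis=0).reshape(h, w)
        priors.append(p / p.sum())
    return np.array(priors), labels

# Synthetic demo: elongated Gaussian PSFs with three distinct orientations.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[-10:11, -10:11]
stack = []
for theta in rng.choice([0.0, 0.6, 1.2], size=90):
    u = xx * np.cos(theta) + yy * np.sin(theta)
    v = -xx * np.sin(theta) + yy * np.cos(theta)
    stack.append(np.exp(-(u**2 / 18 + v**2 / 4)) + 0.01 * rng.standard_normal(yy.shape))
priors, labels = prior_psfs(np.array(stack))
```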

  20. Landcover Based Optimal Deconvolution of PALS L-band Microwave Brightness Temperature

    Science.gov (United States)

    Limaye, Ashutosh S.; Crosson, William L.; Laymon, Charles A.; Njoku, Eni G.

    2004-01-01

    An optimal de-convolution (ODC) technique has been developed to estimate microwave brightness temperatures of agricultural fields using microwave radiometer observations. The technique is applied to airborne measurements taken by the Passive and Active L and S band (PALS) sensor in Iowa during Soil Moisture Experiments in 2002 (SMEX02). Agricultural fields in the study area were predominantly soybeans and corn. The brightness temperatures of corn and soybeans were observed to be significantly different because of large differences in vegetation biomass. PALS observations have significant over-sampling; observations were made about 100 m apart and the sensor footprint extends to about 400 m. Conventionally, observations of this type are averaged to produce smooth spatial data fields of brightness temperatures. However, the conventional approach is in contrast to reality in which the brightness temperatures are in fact strongly dependent on landcover, which is characterized by sharp boundaries. In this study, we mathematically de-convolve the observations into brightness temperature at the field scale (500-800m) using the sensor antenna response function. The result is more accurate spatial representation of field-scale brightness temperatures, which may in turn lead to more accurate soil moisture retrieval.
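    The mathematical core of such a de-convolution is a linear inverse problem: each radiometer sample is a weighted average of field-scale brightness temperatures, with weights given by the antenna response, so stacking the oversampled observations yields an overdetermined system solved by least squares. The sketch below sets this up for a 1-D strip of fields with an invented Gaussian antenna pattern; it illustrates the principle, not the PALS/ODC processing chain.

```python
import numpy as np

# Hypothetical setup: 12 fields along a flight line, oversampled by 60 observations.
n_fields, n_obs = 12, 60
field_pos = np.linspace(0.0, 6.0, n_fields)       # field centers (km)
obs_pos = np.linspace(0.0, 6.0, n_obs)            # radiometer sample centers (km)

# Antenna response: Gaussian weighting of fields around each observation center.
fwhm = 0.8                                        # footprint width (km), illustrative
sigma = fwhm / 2.355
G = np.exp(-0.5 * ((obs_pos[:, None] - field_pos[None, :]) / sigma) ** 2)
G /= G.sum(axis=1, keepdims=True)                 # each row sums to 1

# Truth alternates like corn/soybean fields; observations are smoothed and noisy.
tb_true = np.where(np.arange(n_fields) % 2 == 0, 280.0, 255.0)
tb_obs = G @ tb_true + 0.5 * np.random.randn(n_obs)

# Least-squares de-convolution recovers field-scale brightness temperatures.
tb_est, *_ = np.linalg.lstsq(G, tb_obs, rcond=None)
print(np.round(tb_est, 1))
```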

  1. White matter and visuospatial processing in autism: a constrained spherical deconvolution tractography study.

    Science.gov (United States)

    McGrath, Jane; Johnson, Katherine; O'Hanlon, Erik; Garavan, Hugh; Gallagher, Louise; Leemans, Alexander

    2013-10-01

    Autism spectrum disorders (ASDs) are associated with a marked disturbance of neural functional connectivity, which may arise from disrupted organization of white matter. The aim of this study was to use constrained spherical deconvolution (CSD)-based tractography to isolate and characterize major intrahemispheric white matter tracts that are important in visuospatial processing. CSD-based tractography avoids a number of critical confounds that are associated with diffusion tensor tractography, and to our knowledge, this is the first time that this advanced diffusion tractography method has been used in autism research. Twenty-five participants with ASD and 25 age- and intelligence quotient-matched controls completed a high angular resolution diffusion imaging scan. The inferior fronto-occipital fasciculus (IFOF) and arcuate fasciculus were isolated using CSD-based tractography. Quantitative diffusion measures of white matter microstructural organization were compared between groups and associated with visuospatial processing performance. Significant alteration of white matter organization was present in the right IFOF in individuals with ASD. In addition, in individuals with ASD, poorer visuospatial processing was associated with disrupted white matter in the right IFOF. Using a novel, advanced tractography method to isolate major intrahemispheric white matter tracts in autism, this research has demonstrated that there are significant alterations in the microstructural organization of white matter in the right IFOF in ASD. This alteration was associated with poorer visuospatial processing performance in the ASD group. This study provides an insight into structural brain abnormalities that may influence atypical visuospatial processing in autism.

  2. In vivo deconvolution acoustic-resolution photoacoustic microscopy in three dimensions.

    Science.gov (United States)

    Cai, De; Li, Zhongfei; Chen, Sung-Liang

    2016-02-01

    Acoustic-resolution photoacoustic microscopy (ARPAM) provides a spatial resolution on the order of tens of micrometers and is becoming an essential tool for imaging fine structures, such as the subcutaneous microvasculature. High lateral resolution in ARPAM is achieved using an acoustic transducer with a high numerical aperture (NA); however, the depth of focus and working distance deteriorate correspondingly, sacrificing the imaging range and accessible depth. The axial resolution of ARPAM is limited by the transducer's bandwidth. In this work, we develop deconvolution ARPAM (D-ARPAM) in three dimensions, which improves the lateral resolution by 1.8 and 3.7 times and the axial resolution by 1.7 and 2.7 times, depending on the adopted criteria, using a 20-MHz focused transducer without physically increasing its NA or bandwidth. The resolution enhancement in three dimensions by D-ARPAM is also demonstrated by in vivo imaging of the microvasculature of a chick embryo. The proposed D-ARPAM has potential for biomedical imaging that simultaneously requires high spatial resolution, extended imaging range, and long accessible depth.

  3. Digital high-pass deconvolution by means of an infinite impulse response filter

    CERN Document Server

    Födisch, P; Lange, B; Schönherr, J; Enghardt, W; Kaever, P

    2016-01-01

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in the front-end electronics. Thus, the output signal is shaped by a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem or an increased rate of pulse pile-ups. Moreover, spectroscopy applications require a correction of the pulse-height, whereas a shortened pulse-width is desirable for high-throughput applications. For both objectives, the digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. Therefore, we investigate different design methods for an IIR filter in the discrete-time domain and verify the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finall...

  4. Multichannel Blind Deconvolution of the Arterial Pressure using Ito Calculus Method

    Directory of Open Access Journals (Sweden)

    M. El-Sayed Waheed

    2015-11-01

    Full Text Available Multichannel Blind Deconvolution (MBD) is a powerful tool, particularly for the identification and estimation of dynamical systems in which a sensor for measuring the input is difficult to place. This paper presents an Ito calculus method for the estimation of the unknown time-varying coefficient. The arterial network is modelled as a Finite Impulse Response (FIR) filter with unknown coefficients. A new tool for estimating both the central arterial pressure and the unknown channel dynamics has been developed. The source signal is also unknown. Assuming that one of the FIR filter coefficients is time varying, we have been able to obtain accurate estimation results for the source signal, even though the filter order is unknown. The time-varying filter coefficients have been estimated through the SC algorithm, and we have been able to deconvolve the measurements and obtain both the source signal and the convolution path. The positive results demonstrate that the SC approach is superior to conventional methods.

  5. Denoising spectroscopic data by means of the improved Least-Squares Deconvolution method

    CERN Document Server

    Tkachenko, A; Tsymbal, V; Aerts, C; Kochukhov, O; Debosscher, J

    2013-01-01

    The MOST, CoRoT, and Kepler space missions led to the discovery of a large number of intriguing, and in some cases unique, objects, among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions deliver photometric data of unprecedented quality, these data lack any spectral information, and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Both the faintness of most of the observed stars and the required high S/N of spectroscopic data imply the need for large telescopes, access to which is limited. In this paper, we look for an alternative and aim to develop a technique for denoising originally low-S/N spectroscopic data, making observations of faint targets with small telescopes possible and effective. We present a generalization of the original Least-Squares Deconvolution (LSD) method by implementing a multicomponent average profile and a line strengths corre...

  6. Deconvolution of complex differential scanning calorimetry profiles for protein transitions under kinetic control.

    Science.gov (United States)

    Toledo-Núñez, Citlali; Vera-Robles, L Iraís; Arroyo-Maya, Izlia J; Hernández-Arana, Andrés

    2016-09-15

    A frequent outcome in differential scanning calorimetry (DSC) experiments carried out with large proteins is the irreversibility of the observed endothermic effects. In these cases, DSC profiles are analyzed according to methods developed for temperature-induced denaturation transitions occurring under kinetic control. In the one-step irreversible model (native → denatured) the characteristics of the observed single-peaked endotherm depend on the denaturation enthalpy and the temperature dependence of the reaction rate constant, k. Several procedures have been devised to obtain the parameters that determine the variation of k with temperature. Here, we have elaborated on one of these procedures in order to analyze more complex DSC profiles. Synthetic data for a heat capacity curve were generated according to a model with two sequential reactions; the temperature dependence of each of the two rate constants involved was determined, according to Eyring's equation, by two fixed parameters. It was then shown that our deconvolution procedure, by making use of heat capacity data alone, permits extraction of the parameter values that were initially used. Finally, experimental DSC traces showing two and three maxima were analyzed and reproduced with relative success according to two- and four-step sequential models. Copyright © 2016 Elsevier Inc. All rights reserved.
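    The forward model behind such an analysis is easy to state: for a sequential scheme N → I → D scanned at constant rate, the excess heat capacity is the enthalpy-weighted sum of the two reaction rates, with each rate constant given by Eyring's equation. The sketch below generates a synthetic two-peak DSC curve from that model; all enthalpies and activation parameters are invented for illustration, and the deconvolution (parameter-recovery) step itself is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

KB, H_PLANCK, R = 1.380649e-23, 6.62607015e-34, 8.314  # J/K, J*s, J/(mol*K)

def eyring(T, dH, dS):
    """Eyring rate constant k(T) from activation enthalpy/entropy (J/mol, J/(mol*K))."""
    return (KB * T / H_PLANCK) * np.exp(dS / R - dH / (R * T))

# Illustrative parameters for N -> I -> D (not fitted to any real protein).
dH1, dS1, dHcal1 = 2.8e5, 500.0, 3.0e5   # step 1: activation params + calorimetric enthalpy
dH2, dS2, dHcal2 = 3.0e5, 520.0, 2.0e5   # step 2
beta = 1.0 / 60.0                        # scan rate: 1 K/min, in K/s

def rhs(T, y):
    n, i = y
    k1, k2 = eyring(T, dH1, dS1), eyring(T, dH2, dS2)
    return [-k1 * n / beta, (k1 * n - k2 * i) / beta]   # d/dT = (d/dt)/beta

T_eval = np.linspace(300.0, 390.0, 800)
sol = solve_ivp(rhs, (300.0, 390.0), [1.0, 0.0], t_eval=T_eval, method="LSODA")
n, i = sol.y
cp_excess = (dHcal1 * eyring(T_eval, dH1, dS1) * n
             + dHcal2 * eyring(T_eval, dH2, dS2) * i) / beta   # J/(mol*K), two peaks
```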

  7. TEMPORAL DECONVOLUTION STUDY OF LONG AND SHORT GAMMA-RAY BURST LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, P. N.; Briggs, Michael S.; Connaughton, Valerie; Paciesas, William; Burgess, Michael; Chaplin, Vandiver; Goldstein, Adam; Guiriec, Sylvain [Center for Space Plasma and Aeronomic Research (CSPAR), University of Alabama in Huntsville, NSSTC, 320 Sparkman Drive, Huntsville, AL 35805 (United States); Kouveliotou, Chryssa; Fishman, Gerald [Space Science Office, VP62, NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States); Van der Horst, Alexander J.; Meegan, Charles A. [Center for Space Plasma and Aeronomic Research (CSPAR), Universities Space Research Association, NSSTC, 320 Sparkman Drive, Huntsville, AL 35805 (United States); Bissaldi, Elisabetta [Institute of Astro and Particle Physics, University of Innsbruck, Technikerstr. 25, 6020 Innsbruck (Austria); Diehl, Roland; Foley, Suzanne; Greiner, Jochen; Gruber, David [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching (Germany); Fitzpatrick, Gerard [School of Physics, University College Dublin, Belfield, Stillorgan Road, Dublin 4 (Ireland); Gibby, Melissa; Giles, Misty M. [Jacobs Technology, Inc., Huntsville, AL 35806 (United States); and others

    2012-01-10

    The light curves of gamma-ray bursts (GRBs) are believed to result from internal shocks reflecting the activity of the GRB central engine. Their temporal deconvolution can reveal potential differences in the properties of the central engines in the two populations of GRBs which are believed to originate from the deaths of massive stars (long) and from mergers of compact objects (short). We present here the results of the temporal analysis of 42 GRBs detected with the Gamma-ray Burst Monitor onboard the Fermi Gamma-ray Space Telescope. We deconvolved the profiles into pulses, which we fit with lognormal functions. The distributions of the pulse shape parameters and intervals between neighboring pulses are distinct for both burst types and also fit with lognormal functions. We have studied the evolution of these parameters in different energy bands and found that they differ between long and short bursts. We discuss the implications of the differences in the temporal properties of long and short bursts within the framework of the internal shock model for GRB prompt emission.
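    The pulse-fitting step lends itself to a compact illustration: model each pulse as a lognormal function of time and fit its parameters by nonlinear least squares. The sketch below fits one synthetic pulse; the paper's full analysis (pulse identification, multi-pulse decomposition across energy bands) is not reproduced, and all parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pulse(t, amp, mu, sigma):
    """Lognormal pulse shape; t must be positive (seconds since trigger)."""
    return amp / (t * sigma * np.sqrt(2 * np.pi)) * np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma**2))

# Synthetic count-rate pulse with counting-like noise.
rng = np.random.default_rng(2)
t = np.linspace(0.01, 10.0, 500)
truth = (120.0, np.log(1.5), 0.45)               # amplitude, log time-scale, width
rate = lognormal_pulse(t, *truth)
data = rate + rng.normal(scale=np.sqrt(rate + 1.0))

popt, pcov = curve_fit(lognormal_pulse, t, data, p0=(100.0, 0.0, 0.5))
perr = np.sqrt(np.diag(pcov))
print("amp, mu, sigma =", np.round(popt, 3), "+/-", np.round(perr, 3))
```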

  8. Deconvolution of the particle size distribution of ProRoot MTA and MTA Angelus.

    Science.gov (United States)

    Ha, William Nguyen; Shakibaie, Fardad; Kahler, Bill; Walsh, Laurence James

    2016-01-01

    Objective Mineral trioxide aggregate (MTA) cements contain two types of particles, namely Portland cement (PC) (nominally 80% w/w) and bismuth oxide (BO) (20%). This study aims to determine the particle size distribution (PSD) of PC and BO found in MTA. Materials and methods The PSDs of ProRoot MTA (MTA-P) and MTA Angelus (MTA-A) powder were determined using laser diffraction, and compared to samples of PC (at three different particle sizes) and BO. The non-linear least squares method was used to deconvolute the PSDs into the constituents. MTA-P and MTA-A powders were also assessed with scanning electron microscopy. Results BO showed a near Gaussian distribution for particle size, with a mode distribution peak at 10.48 μm. PC samples milled to differing degrees of fineness had mode distribution peaks from 19.31 down to 4.88 μm. MTA-P had a complex PSD composed of both fine and large PC particles, with BO at an intermediate size, whereas MTA-A had only small BO particles and large PC particles. Conclusions The PSD of MTA cement products is bimodal or more complex, which has implications for understanding how particle size influences the overall properties of the material. Smaller particles may be reactive PC or unreactive radiopaque agent. Manufacturers should disclose particle size information for PC and radiopaque agents to prevent simplistic conclusions being drawn from statements of average particle size for MTA materials.
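    The deconvolution step here is a classic mixture fit: model the measured volume-weighted PSD as a weighted sum of two component distributions and solve for the weights and component parameters by non-linear least squares. The sketch below fits a two-lognormal mixture to a synthetic laser-diffraction-style PSD; the component forms and all numbers are illustrative, not the paper's measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(d, mode, s):
    """Lognormal density over particle diameter d (um), parameterized by its mode."""
    mu = np.log(mode) + s**2                      # mode = exp(mu - s^2)
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * s**2)) / (d * s * np.sqrt(2 * np.pi))

def mixture(d, w, mode1, s1, mode2, s2):
    """Two-component PSD; w = mass fraction of component 1 (e.g., PC)."""
    return w * lognorm_pdf(d, mode1, s1) + (1 - w) * lognorm_pdf(d, mode2, s2)

# Synthetic PSD: 80% 'PC-like' near 5 um, 20% 'BO-like' near 10 um, log-spaced bins.
d = np.logspace(-1, 2, 120)
rng = np.random.default_rng(3)
measured = mixture(d, 0.8, 5.0, 0.5, 10.0, 0.3) * (1 + 0.03 * rng.standard_normal(d.size))

p0 = (0.5, 3.0, 0.4, 12.0, 0.4)
bounds = ([0, 0.1, 0.05, 0.1, 0.05], [1, 100, 2, 100, 2])
popt, _ = curve_fit(mixture, d, measured, p0=p0, bounds=bounds)
print("fraction, modes:", round(popt[0], 2), round(popt[1], 1), round(popt[3], 1))
```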

  9. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Födisch, P., E-mail: p.foedisch@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Wohsmann, J. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Lange, B. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Schönherr, J. [Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Enghardt, W. [OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, PF 41, 01307 Dresden (Germany); Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology, Bautzner Landstr. 400, 01328 Dresden (Germany); German Cancer Consortium (DKTK) and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Kaever, P. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany)

    2016-09-11

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. The output signal is shaped by a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem, or an increased rate of pulse pile-ups. Moreover, spectroscopy applications require a correction of the pulse-height, while a shortened pulse-width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finally, the exponential decay is shaped to a step-like output signal that is exploited by a forward-looking pulse processing.

  10. The Matrix Cookbook

    DEFF Research Database (Denmark)

    Petersen, Kaare Brandt; Pedersen, Michael Syskind

    Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.

  11. Matrix with Prescribed Eigenvectors

    Science.gov (United States)

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
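    The converse construction the abstract refers to is a one-liner from the spectral decomposition: given a nonsingular matrix V whose columns are the desired eigenvectors and a vector of desired eigenvalues λ, the matrix A = V diag(λ) V⁻¹ has exactly those eigenpairs. A minimal sketch with made-up values:

```python
import numpy as np

# Desired eigenvectors (columns of V) and eigenvalues; values are arbitrary examples.
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])
lam = np.array([3.0, -2.0])

A = V @ np.diag(lam) @ np.linalg.inv(V)        # spectral synthesis: A v_i = lam_i v_i

# Verify: A V = V diag(lam), and the eigenvalues of A match.
print(np.allclose(A @ V, V @ np.diag(lam)))    # True
print(np.sort(np.linalg.eigvals(A)))           # [-2.  3.]
```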

  12. Heterogeneity of the state and functionality of water molecules sorbed in an amorphous sugar matrix.

    Science.gov (United States)

    Imamura, Koreyoshi; Kagotani, Ryo; Nomura, Mayo; Kinugawa, Kohshi; Nakanishi, Kazuhiro

    2012-04-01

    An amorphous matrix composed of sugar molecules is frequently used in the pharmaceutical industry. An amorphous sugar matrix exhibits high hygroscopicity, and it has been established that the sorbed water lowers the glass transition temperature T(g) of the amorphous sugar matrix. One would naturally expect the random allocation and configuration of sugar molecules to result in heterogeneity of states for sorbed water. However, most analyses of the behavior of water, when sorbed to an amorphous sugar matrix, have implicitly assumed that all of the sorbed water molecules are in a single state. In this study, the states of water molecules sorbed in an amorphous sugar matrix were analyzed by Fourier-transform IR spectroscopy and a Fourier self-deconvolution technique. When sorbed water molecules were classified into five states, according to the extent to which they are restricted, three of the states resulted in a lowering of T(g) of an amorphous sugar matrix, while the other two were independent of the plasticization of the matrix. This finding provides an explanation for the paradoxical fact that compression at several hundreds of MPa significantly decreases the equilibrium water content at a given RH, while T(g) remains unchanged.

  13. Set-theoretic deconvolution (STD) for multichromatic ground/air/space-based imagery

    Science.gov (United States)

    Safronov, Aleksandr N.

    1997-09-01

    This paper proposes a class of nonlinear methods, called Set-Theoretic Deconvolution (STD), developed for the joint restoration of M (M > 1) monochrome distorted 2-dimensional images (snapshots) of an unknown extended object viewed through an optical channel with unknown PSF, whose true monochrome brightness profiles look distinct at the M slightly different wavelengths chosen. The presented method appeals to the generalized Projection Onto Convex Sets (POCS) formalism: a proper projective metric is introduced and then minimized. A number of operators is derived in closed form and cyclically applied to an M-dimensional functional vector built up from estimates for combinations of monochrome images. In projecting the vector onto convex sets, one attempts to avoid non-physical inversion and to correctly form a feasible solution (fixed point) consistent with qualitative, not quantitative, information assumed to be known in advance. Computer simulation demonstrates that the resulting improved monochrome images reveal fine details which could not easily be discerned in the original distorted images. This technique recovers fairly reliably the total multichromatic 2-D portrait of an arbitrary compact object whose monochrome brightness distributions have discontinuities and are highly nonconvex and multiply connected. Originally developed for the deblurring of passively observed objects, the STD approach can be carried over to scenarios with actively irradiated objects (e.g., near-Earth space targets). Under advanced conditions, such as spatio-spectrally diversified laser illumination or coherent Doppler imaging implementation, the synthesized loop deconvolver could be a universal tool for object feature extraction by means of an occasionally aberrated space-borne telescope or turbulence-affected ground/air-based large-aperture optical systems.
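    The POCS mechanism itself is simple to demonstrate: cycle projections onto each constraint set, and the iterate converges toward a point in their intersection when one exists. The sketch below restores a nonnegative 1-D signal from low-pass Fourier data by alternating two convex projections (data consistency in the Fourier domain, nonnegativity in the signal domain); it illustrates the formalism generically, not the paper's multichromatic operators.

```python
import numpy as np

n = 256
truth = np.zeros(n); truth[60:70] = 1.0; truth[150:180] = 0.5   # nonnegative object

# Measurement: only the low-frequency Fourier coefficients are known.
band = np.abs(np.fft.fftfreq(n)) < 0.08
measured = np.fft.fft(truth) * band

def project_data(x):
    """Projection onto the affine set {x : FFT(x) matches the measured in-band coefficients}."""
    X = np.fft.fft(x)
    X[band] = measured[band]
    return np.real(np.fft.ifft(X))

def project_nonneg(x):
    """Projection onto the convex cone of nonnegative signals."""
    return np.clip(x, 0.0, None)

x = np.zeros(n)
for _ in range(200):                      # cyclic POCS iteration
    x = project_nonneg(project_data(x))
print("relative residual:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```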

  14. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in the clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  15. The raman spectrum of biosynthetic human growth hormone. Its deconvolution, bandfitting, and interpretation

    Science.gov (United States)

    Tensmeyer, Lowell G.

    1988-05-01

    The Raman spectrum of amorphous biosynthetic human growth hormone, somatotropin, has been measured at high signal-to-noise ratios, using a CW argon ion laser and single-channel detection. The rms signal-to-noise ratio varies from 1800:1 in the Amide I region near 1650 cm^-1 to 500:1 in the disulfide stretch region near 500 cm^-1. Component Raman bands have been extracted from the entire spectral envelope from 1800-400 cm^-1 by an interactive process involving both partial deconvolution and band-fitting. Interconsistency of all bands has been achieved by multiple overlapping of adjacent regions that had been isolated for the band-fitting programs. The resulting areas of the Raman component bands have been interpreted to show the ratios of peptide conformations in the hormone: 64% α-helix, 24% β-sheet, 8% β-turns and 4% γ-turns. Analysis of the tyrosine region, usually described as a Fermi resonance doublet near ~830-850 cm^-1, shows four bands, at 825, 833, 853, and 859 cm^-1 in this macromolecule. Integrated intensities of these bands (2:2:2:2) are interpreted to show that only half of the eight tyrosine residues function as hydrogen-bond bridges via the acceptance of protons. Both disulfide bridges fall within the frequency ranges for normal, unstressed S-S bonds: the 511 and 529 cm^-1 bands are indicative of the gauche-gauche-gauche and trans-gauche-gauche conformations, respectively.

  16. Information theoretical methods to deconvolute genetic regulatory networks applied to thyroid neoplasms

    Science.gov (United States)

    Hernández-Lemus, Enrique; Velázquez-Fernández, David; Estrada-Gil, Jesús K.; Silva-Zolezzi, Irma; Herrera-Hernández, Miguel F.; Jiménez-Sánchez, Gerardo

    2009-12-01

    Most common pathologies in humans are not caused by the mutation of a single gene, rather they are complex diseases that arise due to the dynamic interaction of many genes and environmental factors. This plethora of interacting genes generates a complexity landscape that masks the real effects associated with the disease. To construct dynamic maps of gene interactions (also called genetic regulatory networks) we need to understand the interplay between thousands of genes. Several issues arise in the analysis of experimental data related to gene function: on the one hand, the nature of measurement processes generates highly noisy signals; on the other hand, there are far more variables involved (number of genes and interactions among them) than experimental samples. Another source of complexity is the highly nonlinear character of the underlying biochemical dynamics. To overcome some of these limitations, we generated an optimized method based on the implementation of a Maximum Entropy Formalism (MaxEnt) to deconvolute a genetic regulatory network based on the most probable meta-distribution of gene-gene interactions. We tested the methodology using experimental data for Papillary Thyroid Cancer (PTC) and Thyroid Goiter tissue samples. The optimal MaxEnt regulatory network was obtained from a pool of 25,593,993 different probability distributions. The group of observed interactions was validated by several (mostly in silico) means and sources. For the associated Papillary Thyroid Cancer Gene Regulatory Network (PTC-GRN) the majority of the nodes (genes) have very few links (interactions) whereas a small number of nodes are highly connected. PTC-GRN is also characterized by high clustering coefficients and network heterogeneity. These properties have been recognized as characteristic of topological robustness, and they have been largely described in relation to biological networks. A number of biological validity outcomes are discussed with regard to both the

  17. Developing terahertz imaging equation and enhancement of the resolution of terahertz images using deconvolution

    Science.gov (United States)

    Ahi, Kiarash; Anwar, Mehdi

    2016-04-01

    This paper introduces a novel reconstruction approach for enhancing the resolution of terahertz (THz) images. For this purpose, the THz imaging equation is derived. To the best of our knowledge, this paper reports the first THz imaging equation. This imaging equation is universal for THz far-field imaging systems and can be used for analyzing, describing and modeling these systems. The geometry and behavior of Gaussian beams in the far-field region imply that the FWHM of THz beams diverges as the frequency of the beam decreases. Thus, the resolution of the measurement decreases at lower frequencies. On the other hand, the depth of penetration of THz beams decreases as frequency increases. Roughly speaking, beams below 1.5 THz are transmitted into integrated circuit (IC) packages and similar packaged objects. Thus, it is not possible to use THz pulses with higher frequencies to achieve higher-resolution inspection of packaged items. In this paper, after developing the 3-D THz point spread function (PSF) of the scanning THz beam and then the THz imaging equation, THz images are enhanced through deconvolution of the THz PSF from the THz images. As a result, the resolution has been improved several times beyond the physical limitations of the THz measurement setup in the far-field region, and sub-Nyquist images have been achieved. In particular, MSE and SSIM have been improved by 27% and 50%, respectively. Details as small as 0.2 mm were made visible in THz images which originally revealed no details smaller than 2.2 mm. In other words, the resolution of the images has been increased by 10 times. The accuracy of the reconstructed images was verified against high-resolution X-ray images.

  18. Improved sensitivity to cerebral white matter abnormalities in Alzheimer's disease with spherical deconvolution based tractography.

    Directory of Open Access Journals (Sweden)

    Yael D Reijmer

    Full Text Available Diffusion tensor imaging (DTI)-based fiber tractography (FT) is the most popular approach for investigating white matter tracts in vivo, despite its inability to reconstruct fiber pathways in regions with "crossing fibers." Recently, constrained spherical deconvolution (CSD) has been developed to mitigate the adverse effects of "crossing fibers" on DTI-based FT. Notwithstanding the methodological benefit, the clinical relevance of CSD-based FT for the assessment of white matter abnormalities remains unclear. In this work, we evaluated the applicability of a hybrid framework, in which CSD-based FT is combined with conventional DTI metrics, to assess white matter abnormalities in 25 patients with early Alzheimer's disease. Both CSD- and DTI-based FT were used to reconstruct two white matter tracts: one with regions of "crossing fibers," i.e., the superior longitudinal fasciculus (SLF), and one which contains only one fiber orientation, i.e., the midsagittal section of the corpus callosum (CC). The DTI metrics, fractional anisotropy (FA) and mean diffusivity (MD), obtained from these tracts were related to memory function. Our results show that in the tract with "crossing fibers" the relation between FA/MD and memory was stronger with CSD- than with DTI-based FT. By contrast, in the fiber bundle where one fiber population predominates, the relation between FA/MD and memory was comparable between both tractography methods. Importantly, these associations were most pronounced after adjustment for the planar diffusion coefficient, a measure reflecting the degree of fiber organization complexity. These findings indicate that, compared to conventionally applied DTI-based FT, CSD-based FT combined with DTI metrics can increase the sensitivity to detect functionally significant white matter abnormalities in tracts with complex white matter architecture.

  19. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    Directory of Open Access Journals (Sweden)

    Roerdink Jos BTM

    2008-04-01

    Full Text Available Abstract Background We present a simple, data-driven method to extract haemodynamic response functions (HRFs) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs. Results ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains, and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series. Conclusion With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
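    The two-stage structure of ForWaRD can be sketched compactly: a lightly regularized Fourier-domain inverse of the stimulus convolution, followed by wavelet shrinkage of the amplified noise. The sketch below applies that recipe to one synthetic voxel time series (a random event train convolved with a made-up HRF plus noise); the regularization constant, wavelet, and threshold rule are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt

def forward_deconv(y, stimulus, lam=0.05, wavelet="db4", level=4):
    """Fourier-regularized deconvolution followed by wavelet shrinkage (ForWaRD-style)."""
    n = len(y)
    H, Y = np.fft.rfft(stimulus), np.fft.rfft(y)
    # Stage 1: Tikhonov-damped Fourier inversion of the stimulus convolution.
    x = np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.max(np.abs(H)) ** 2), n)
    # Stage 2: soft-threshold wavelet detail coefficients to suppress leaked noise.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(n))  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:n]

# Synthetic voxel: random events, gamma-shaped HRF, white noise.
rng = np.random.default_rng(5)
n = 256
stim = (rng.random(n) < 0.08).astype(float)
t = np.arange(0, 20, 1.0)
hrf_true = (t / 4.0) ** 2 * np.exp(-t / 2.0); hrf_true /= hrf_true.max()
y = np.convolve(stim, hrf_true)[:n] + 0.2 * rng.standard_normal(n)
hrf_est = forward_deconv(y, stim)[:20]   # first samples approximate the HRF
```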

  20. Deconvolution analyses with tent functions reveal delayed and long-sustained increases of BOLD signals with acupuncture stimulation.

    Science.gov (United States)

    Murase, Tomokazu; Umeda, Masahiro; Fukunaga, Masaki; Tanaka, Chuzo; Higuchi, Toshihiro

    2013-01-01

    We used deconvolution analysis to examine temporal changes in brain activity after acupuncture stimulation and assess brain responses without expected reference functions. We also examined temporal changes in brain activity after sham acupuncture (noninsertive) and scrubbing stimulation. We divided 26 healthy right-handed adults into a group of 13 who received real acupuncture with manual manipulation and a group of 13 who received both tactile stimulations. Functional magnetic resonance imaging (fMRI) sequences consisted of four 15-s stimulation blocks (ON) interspersed between one 30-s and four 45-s rest blocks (OFF) for a total scanning time of 270 s. We analyzed data by using Statistical Parametric Mapping 8 (SPM8), MarsBaR, and Analysis of Functional NeuroImages (AFNI) software. For statistical analysis, we used 3dDeconvolve, part of the AFNI package, to extract the impulse response functions (IRFs) of the fMRI signals on a voxel-wise basis, and we tested the time courses of the extracted IRFs for the stimulations. We found stimulus-specific impulse responses of blood oxygen level-dependent (BOLD) signals in various brain regions. We observed significantly delayed and long-sustained increases of BOLD signals in several brain regions following real acupuncture compared to sham acupuncture and palm scrubbing, which we attribute to peripheral nociceptors, flare responses, and processing of the central nervous system. Acupuncture stimulation induced continued activity that was stronger than activity after the other stimulations. We used tent function deconvolution to process fMRI data for acupuncture stimulation and found delayed increasing and delayed decreasing changes in BOLD signal in the somatosensory areas and areas related to pain perception. Deconvolution analyses with tent functions are expected to be useful in extracting complicated and associated brain activity that is delayed and sustained for a long period after various stimulations.

  1. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

    Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and the use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.

  2. Gearbox fault diagnosis of rolling mills using multiwavelet sliding window neighboring coefficient denoising and optimal blind deconvolution

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Fault diagnosis of rolling mills, especially of the main drive gearbox, is of great importance for high-quality products and long-term safe operation. However, the useful fault information is usually submerged in heavy background noise under severe conditions. Therefore, a novel method based on multiwavelet sliding-window neighboring-coefficient denoising and optimal blind deconvolution is proposed for gearbox fault diagnosis in rolling mills. The emerging multiwavelets can possess several important signal processing properties simultaneously. Owing to their multiple scaling and wavelet basis functions, they are well suited to matching various features. Due to the periodicity of gearbox signals, a sliding window is recommended for local threshold denoising, so as to avoid the "overkill" of conventional universal thresholding techniques. Meanwhile, neighboring-coefficient denoising, which considers the correlation of the coefficients, is introduced to effectively process the noisy signals in every sliding window. Thus, multiwavelet sliding-window neighboring-coefficient denoising not only performs excellent fault extraction, but also accords with the essence of gearbox fault features. On the other hand, optimal blind deconvolution is carried out to highlight the denoised features for the operator's easy identification. The filter length is vital for effective and meaningful results. Hence, filter length selection based on kurtosis is discussed in order to reap the full benefits of this technique. The new method is applied to two gearbox fault diagnostic cases from hot strip finishing mills and compared with multiwavelet and scalar wavelet methods with/without optimal blind deconvolution. The results show that it can enhance the ability of fault detection for the main drive gearboxes.

  3. Application of maximum-entropy spectral estimation to deconvolution of XPS data. [X-ray Photoelectron Spectroscopy]

    Science.gov (United States)

    Vasquez, R. P.; Klein, J. D.; Barton, J. J.; Grunthaner, F. J.

    1981-01-01

    A comparison is made between maximum-entropy spectral estimation and traditional methods of deconvolution used in electron spectroscopy. The maximum-entropy method is found to have higher resolution-enhancement capabilities and, if the broadening function is known, can be used with no adjustable parameters with a high degree of reliability. The method and its use in practice are briefly described, and a criterion is given for choosing the optimal order for the prediction filter based on the prediction-error power sequence. The method is demonstrated on a test case and applied to X-ray photoelectron spectra.
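
    As a rough illustration of the ingredients named above, the sketch below implements Burg's maximum-entropy (autoregressive) estimator and selects the prediction-filter order from the prediction-error power sequence. The 1% drop threshold is an assumption; the paper's actual criterion is not reproduced here.

```python
# Burg maximum-entropy (AR) estimation with the model order chosen from the
# prediction-error power sequence; the 1% drop threshold is an assumption.
import numpy as np

def burg(x, max_order):
    """Burg recursion: AR coefficients and prediction-error powers E[0..p]."""
    x = np.asarray(x, float)
    ef, eb = x.copy(), x.copy()
    a = np.array([1.0])
    E = [np.mean(x ** 2)]
    for m in range(max_order):
        f, b = ef[m + 1:], eb[m:-1]
        k = -2.0 * b.dot(f) / (f.dot(f) + b.dot(b))   # reflection coefficient
        ef[m + 1:], eb[m + 1:] = f + k * b, b + k * f
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                           # Levinson update
        E.append((1.0 - k * k) * E[-1])
    return a, np.array(E)

rng = np.random.default_rng(2)
t = np.arange(512)
x = np.sin(0.3 * t) + 0.5 * np.sin(0.8 * t) + 0.2 * rng.standard_normal(512)

a, E = burg(x, 20)
# lowest order at which the error power stops dropping appreciably
order = next((p for p in range(1, 21) if E[p - 1] - E[p] < 0.01 * E[p - 1]), 20)
print("selected prediction-filter order:", order)
```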

  4. Blind deconvolution of images with model discrepancies using maximum a posteriori estimation with heavy-tailed priors

    Science.gov (United States)

    Kotera, Jan; Šroubek, Filip

    2015-02-01

    Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. This task is severely ill-posed, and typical approaches involve heuristic or otherwise mathematically unexplained steps to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.
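
    For orientation, here is a deliberately simplified alternating MAP-style blind deconvolution sketch. It substitutes quadratic (Tikhonov) regularizers, which have closed-form Fourier updates, for the paper's heavy-tailed priors and boundary handling; kernel size and weights are arbitrary assumptions.

```python
# Simplified alternating MAP-style blind deconvolution: quadratic priors with
# closed-form Fourier updates replace heavy-tailed priors; all weights and
# sizes are arbitrary assumptions.
import numpy as np
from numpy.fft import fft2, ifft2

def pad_psf(k, shape):
    """Embed a small kernel into a full-size array, centered at the origin."""
    K = np.zeros(shape)
    K[: k.shape[0], : k.shape[1]] = k
    return np.roll(K, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))

def blind_deconv(d, ksize=9, lam=1e-2, gam=1e-1, iters=20):
    k = np.zeros((ksize, ksize)); k[ksize // 2, ksize // 2] = 1.0  # delta init
    D = fft2(d)
    for _ in range(iters):
        K = fft2(pad_psf(k, d.shape))
        u = np.real(ifft2(np.conj(K) * D / (np.abs(K) ** 2 + lam)))   # image
        U = fft2(u)
        kh = np.real(ifft2(np.conj(U) * D / (np.abs(U) ** 2 + gam)))  # kernel
        k = np.roll(kh, (ksize // 2, ksize // 2), axis=(0, 1))[:ksize, :ksize]
        k = np.clip(k, 0.0, None)                   # project onto valid PSFs
        k /= k.sum() + 1e-12
    return u, k

img = np.zeros((64, 64)); img[16::16, 16::16] = 1.0   # toy sharp scene
k0 = np.outer(np.hanning(9), np.hanning(9)); k0 /= k0.sum()
blurred = np.real(ifft2(fft2(img) * fft2(pad_psf(k0, img.shape))))
u, k = blind_deconv(blurred)
resid = np.real(ifft2(fft2(u) * fft2(pad_psf(k, img.shape)))) - blurred
print("data misfit after 20 iterations:", np.abs(resid).max())
```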

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. Extensions Of The Method Of Moments For Deconvolution Of Experimental Data

    Science.gov (United States)

    Small, Enoch W.; Libertini, Louis J.; Brown, David W.; Small, Jeanne R.

    1989-05-01

    The Method of Moments is one of a series of closely related transform methods which have been developed primarily for the deconvolution and analysis of fluorescence decay data. The main distinguishing feature of the Method of Moments is that it has been designed to be robust with respect to several important nonrandom errors of instrumental origin. The historical development of the method is reviewed here. Several new extensions are also described, including a statistical theory, an improved global analysis, and a method for analyzing continuous distributions of lifetimes. The new statistical theory is the first to incorporate a combined treatment of exponential depression and moment index displacement, both necessary components of the Method of Moments. In comparisons with the more commonly used least squares iterative reconvolution (LSIR) approach, it is shown that, in analyses of ideal synthetic data with random noise, the Method of Moments gives deviations in recovered parameters which are slightly greater but essentially comparable to those found by the data fitting method. Real experimental data also contain nonrandom errors. In the presence of certain such errors, decay parameters recovered by the Method of Moments will be unaffected, whereas the least squares method may yield incorrect results, unless care is taken to fit all of the data errors. An example of the improved global analysis application of the Method of Moments is shown in which two rhodamine dyes with very close lifetimes are distinguished based on spectral data. Also, the use of the distribution analysis method is illustrated with the binding of the intercalating dye ethidium bromide to DNA and nucleosome core particles. At very low ionic strength the width and location of the lifetime distribution shows a time dependence, indicating time-dependent changes in the environment of the probe. Finally, examples of Method of Moments analyses are shown for a totally different kind of data
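
    One property underlying moment-based deconvolution can be shown in a few lines: the centroid (first moment over zeroth moment) of a convolution is the sum of the centroids of its factors, so for a single-exponential decay the lifetime equals the centroid shift between the measured curve and the excitation pulse. The sketch below demonstrates only this toy case; exponential depression and moment index displacement, which the abstract stresses as essential in practice, are omitted.

```python
# Toy illustration of the moment idea: the centroid (m1/m0) of a convolution
# equals the sum of the factors' centroids, so for a single-exponential decay
# the lifetime is the centroid shift between measured curve and excitation.
import numpy as np

dt = 0.01
t = np.arange(0.0, 50.0, dt)
tau = 3.0
E = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)          # excitation / IRF
f = np.exp(-t / tau)                               # impulse-response decay
F = np.convolve(E, f)[: t.size] * dt               # measured decay curve

centroid = lambda y: np.sum(t * y) / np.sum(y)     # first/zeroth moment
print("recovered lifetime:", centroid(F) - centroid(E))   # ~= 3.0
```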

  7. Denoising spectroscopic data by means of the improved least-squares deconvolution method

    Science.gov (United States)

    Tkachenko, A.; Van Reeth, T.; Tsymbal, V.; Aerts, C.; Kochukhov, O.; Debosscher, J.

    2013-12-01

    Context. The MOST, CoRoT, and Kepler space missions have led to the discovery of a large number of intriguing, and in some cases unique, objects among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions have delivered photometric data of unprecedented quality, these data lack any spectral information and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Aims: The faintness of most of the observed stars and the required high signal-to-noise ratio (S/N) of spectroscopic data both imply the need to use large telescopes, access to which is limited. In this paper, we look for an alternative, and aim for the development of a technique that allows the denoising of the originally low S/N (typically, below 80) spectroscopic data, making observations of faint targets with small telescopes possible and effective. Methods: We present a generalization of the original least-squares deconvolution (LSD) method by implementing a multicomponent average profile and a line strengths correction algorithm. We tested the method on simulated and real spectra of single and binary stars, among which are two intrinsically variable objects. Results: The method was successfully tested on the high-resolution spectra of Vega and a Kepler star, KIC 04749989. Application to the two pulsating stars, 20 CVn and HD 189631, showed that the technique is also applicable to intrinsically variable stars: the results of frequency analysis and mode identification from the LSD model spectra for both objects are in good agreement with the findings from the literature. Depending on the S/N of the original data and spectral characteristics of a star, the gain in S/N in the LSD model spectrum typically ranges from 5 to 15 times. Conclusions: The technique introduced in this paper allows an effective denoising of the originally low S/N spectroscopic data. The high S/N spectra obtained
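
    The LSD model underlying the method expresses the observed spectrum as a common line profile stamped at known line positions with known strengths, and recovers that profile by linear least squares. A bare-bones sketch follows; the line list, strengths, and noise level are made up for illustration, and the paper's multicomponent and line-strength-correction extensions are not included.

```python
# Bare-bones least-squares deconvolution (LSD) sketch: the spectrum is one
# common profile stamped at known line positions with known strengths.
import numpy as np

rng = np.random.default_rng(3)
npix, nprof = 1000, 41
positions = [120, 300, 310, 555, 700, 820]        # line centres (pixels)
weights = [0.9, 0.5, 0.7, 1.0, 0.3, 0.6]          # line strengths

M = np.zeros((npix, nprof))                       # "line mask" design matrix
for p, w in zip(positions, weights):
    for j in range(nprof):
        M[p + j - nprof // 2, j] += w

v = np.arange(nprof) - nprof // 2
z_true = -0.4 * np.exp(-0.5 * (v / 4.0) ** 2)     # common absorption profile
spectrum = 1.0 + M @ z_true + 0.02 * rng.standard_normal(npix)

z_lsd, *_ = np.linalg.lstsq(M, spectrum - 1.0, rcond=None)
print("profile depth true vs LSD:", z_true.min(), z_lsd.min())
```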

  8. A deconvolution technique to correct deep images of galaxies from instrumental scattered light

    Science.gov (United States)

    Karabal, E.; Duc, P.-A.; Kuntschner, H.; Chanial, P.; Cuillandre, J.-C.; Gwyn, S.

    2017-05-01

    Deep imaging of the diffuse light that is emitted by stellar fine structures and outer halos around galaxies is often now used to probe their past mass assembly. Because the extended halos survive longer than the relatively fragile tidal features, they trace more ancient mergers. We use images that reach surface brightness limits as low as 28.5-29 mag arcsec-2 (g-band) to obtain light and color profiles up to 5-10 effective radii of a sample of nearby early-type galaxies. These were acquired with MegaCam as part of the CFHT MATLAS large programme. These profiles may be compared to those produced using simulations of galaxy formation and evolution, once corrected for instrumental effects. Indeed they can be heavily contaminated by the scattered light caused by internal reflections within the instrument. In particular, the nuclei of galaxies generate artificial flux in the outer halo, which has to be precisely subtracted. We present a deconvolution technique, making use of very large kernels, to remove the artificial halos. The technique, which is based on PyOperators, is more time efficient than the model-convolution methods that are also used for that purpose. This is especially the case for galaxies with complex structures that are hard to model. Having a good knowledge of the point spread function (PSF), including its outer wings, is critical for the method. A database of MegaCam PSF models corresponding to different seeing conditions and bands was generated directly from the deep images. We show that the difference in the PSFs in different bands causes artificial changes in the color profiles, in particular a reddening of the outskirts of galaxies having a bright nucleus. The method is validated with a set of simulated images and applied to three representative test cases: NGC 3599, NGC 3489, and NGC 4274, two of which exhibit a prominent ghost halo; the method successfully removes it. The library of PSFs (FITS files) is only available at the
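
    A schematic of the underlying operation (not the authors' PyOperators pipeline): a PSF with extended wings scatters a bright core into an artificial halo, and a regularized Fourier-domain inverse filter removes it. The power-law PSF, toy image, and regularization constant are placeholders.

```python
# Fourier-domain removal of a scattered-light halo with a regularized inverse
# filter; PSF shape, image, and epsilon are placeholders.
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

n = 512
y, x = np.mgrid[-n // 2 : n // 2, -n // 2 : n // 2]
r2 = x ** 2 + y ** 2

psf = 1.0 / (1.0 + r2 / 4.0) ** 1.75              # core plus power-law wings
psf /= psf.sum()
H = fft2(ifftshift(psf))                          # transfer function

galaxy = np.exp(-r2 / (2 * 15.0 ** 2))            # toy galaxy
image = np.real(ifft2(fft2(galaxy) * H))          # halo-contaminated image

eps = 1e-3                                        # regularization constant
deconv = np.real(ifft2(fft2(image) * np.conj(H) / (np.abs(H) ** 2 + eps)))
print("peak before/after deconvolution:", image.max(), deconv.max())
```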

  9. Estimating High Frequency Energy Radiation of Large Earthquakes by Image Deconvolution Back-Projection

    Science.gov (United States)

    Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim

    2017-04-01

    With the recent establishment of regional dense seismic arrays (e.g., Hi-net in Japan, USArray in North America), advanced digital data processing has enabled improvement of back-projection methods, which have become popular and are widely used to track the rupture process of moderate to large earthquakes. Back-projection methods can be classified into two groups, one using time domain analyses and the other frequency domain analyses, with minor technical differences within each group. Here we focus on back-projection performed in the time domain using seismic waveforms recorded at teleseismic distances (30-90 degrees). For the standard back-projection (Ishii et al., 2005), teleseismic P waves that are recorded on vertical components of a dense seismic array are analyzed. Since seismic arrays have limited resolution and we make several assumptions (e.g., that the observed waveforms contain only direct P waves and that every trace has an identical waveform), the final images from back-projections show stacked amplitudes (or correlation coefficients) that are often smeared in both the time and space domains. Although it might not be difficult to reveal the overall source process for a giant seismic source such as the 2004 Mw 9.0 Sumatra earthquake, where the source extent is about 1400 km (Ishii et al., 2005; Krüger and Ohrnberger, 2005), there are more problems in imaging the detailed processes of earthquakes with smaller source dimensions, such as a M 7.5 earthquake with a source extent of 100-150 km. For smaller earthquakes, it is more difficult to resolve the spatial distribution of the radiated energy. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to determine the sources of high frequency energy radiation by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a
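
    The standard delay-and-stack back-projection that IDBP builds on can be sketched as follows: for each trial source on a grid, shift each trace by its predicted travel time and stack. The geometry, velocity, and waveforms below are synthetic stand-ins chosen so the example runs quickly.

```python
# Delay-and-stack back-projection sketch with synthetic geometry and waveforms.
import numpy as np

nsta, nt, dt, v = 20, 2000, 0.05, 8.0             # stations, samples, s, km/s
ang = np.linspace(0, 2 * np.pi, nsta, endpoint=False)
sta = 250.0 * np.column_stack([np.cos(ang), np.sin(ang)])   # station ring (km)
src_true = np.array([10.0, -5.0])

t = np.arange(nt) * dt
wavelet = lambda t0: np.exp(-(((t - t0) / 0.5) ** 2))
traces = np.array([wavelet(np.linalg.norm(s - src_true) / v) for s in sta])

grid = np.linspace(-20.0, 20.0, 41)
power = np.zeros((41, 41))
for i, gx in enumerate(grid):
    for j, gy in enumerate(grid):
        shifts = np.linalg.norm(sta - [gx, gy], axis=1) / v
        stack = sum(np.interp(t + s, t, tr) for s, tr in zip(shifts, traces))
        power[j, i] = np.max(stack ** 2)          # coherence of the stack

jy, ix = np.unravel_index(power.argmax(), power.shape)
print("recovered source:", grid[ix], grid[jy])    # ~ (10, -5)
```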

  10. Phosphine in various matrixes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Matrix-bound phosphine was determined in the Jiaozhou Bay coastal sediment, in prawn-pond bottom soil, in the eutrophic lake Wulongtan, in sewage sludge, and in paddy soil as well. Results showed that matrix-bound phosphine levels in freshwater and coastal sediments, as well as in sewage sludge, are significantly higher than those in paddy soil. The correlation between matrix-bound phosphine concentrations and organic phosphorus contents in sediment samples is discussed.

  11. Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-Forming Regions

    CERN Document Server

    Roy, Arabindo; Bock, James J; Brunt, Christopher M; Chapin, Edward L; Devlin, Mark J; Dicker, Simon R; France, Kevin; Gibb, Andrew G; Griffin, Matthew; Gundersen, Joshua O; Halpern, Mark; Hargrave, Peter C; Hughes, David H; Klein, Jeff; Marsden, Gaelen; Martin, Peter G; Mauskopf, Philip; Netterfield, Calvin B; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D P; Tucker, Carole; Tucker, Gregory S; Viero, Marco P; Wiebe, Donald V

    2010-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.'5 inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the Spectral Energy Distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationsh...
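
    The flux-conserving Lucy-Richardson iteration itself is compact; below is a generic textbook version (not BLAST-specific) applied to a synthetic point source blurred by a Gaussian beam.

```python
# Generic flux-conserving Richardson-Lucy iteration on a synthetic example.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, iters=50):
    psf = psf / psf.sum()
    psf_T = psf[::-1, ::-1]                        # adjoint (flipped) kernel
    est = np.full_like(data, data.mean())          # flat, flux-matched start
    for _ in range(iters):
        ratio = data / np.maximum(fftconvolve(est, psf, "same"), 1e-12)
        est = est * fftconvolve(ratio, psf_T, "same")   # multiplicative update
    return est

yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 6.0 ** 2))
truth = np.zeros((128, 128)); truth[64, 64] = 1.0
data = fftconvolve(truth, psf / psf.sum(), "same") + 1e-4   # keep positive
print("peak before/after L-R:", data.max(), richardson_lucy(data, psf).max())
```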

  12. A Practical Deconvolution Computation Algorithm to Extract 1D Spectra from 2D Images of Optical Fiber Spectroscopy

    CERN Document Server

    Li, Guangwei; Bai, Zhongrui

    2015-01-01

    Bolton and Schlegel presented a promising deconvolution method to extract 1D spectra from a 2D optical fiber spectral CCD image. The method can eliminate the PSF differences between fibers, extract spectra to the photon noise level, and improve the resolution. However, the method is limited by its huge computational requirements and thus cannot be used in actual data reduction. In this article, we develop a practical computation method to solve this problem. The new method can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. Our method does not require large amounts of memory and can extract a 4k × 4k noise-free CCD image with 250 fibers in 2 hr. To make our method more practical, we further consider the influence of noise, which makes deconvolution an intrinsically ill-posed problem. We modify our method with a Tikhonov regularization term to suppress the method-induced noise. Compared with the results of tra...
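
    The Tikhonov-regularized normal equations at the heart of such an extraction are easy to write down with sparse matrices. A toy version with a made-up banded PSF matrix and a conjugate-gradient solve is sketched below; real extractions couple neighboring fibers and vary the PSF with position, which this ignores.

```python
# Toy regularized extraction: detector image unrolled into b, per-bin PSF
# footprints as columns of sparse A, solve (A^T A + lam I) x = A^T b with CG.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

rng = np.random.default_rng(5)
npix, nflux = 5000, 800                            # detector pixels, flux bins
rows, cols, vals = [], [], []
for j in range(nflux):                             # each bin lights 7 pixels
    base = int(j * (npix - 10) / nflux)
    for kk in range(7):
        rows.append(base + kk); cols.append(j)
        vals.append(np.exp(-0.5 * (kk - 3) ** 2))  # Gaussian PSF footprint
A = sparse.csr_matrix((vals, (rows, cols)), shape=(npix, nflux))

x_true = rng.uniform(0.5, 2.0, nflux)
b = A @ x_true + 0.01 * rng.standard_normal(npix)

lam = 1e-3                                         # Tikhonov term tames noise
lhs = (A.T @ A + lam * sparse.identity(nflux)).tocsr()
x_hat, info = cg(lhs, A.T @ b)
print("CG converged:", info == 0, " max error:", np.abs(x_hat - x_true).max())
```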

  13. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools.

    Science.gov (United States)

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V; Hernandez, Felix; de Voogt, Pim

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in an LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds, including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil, and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline, and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers, and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds.

  14. Application of speckle and (multi-object) multi-frame blind deconvolution techniques on imaging and imaging spectropolarimetric data

    CERN Document Server

    Puschmann, K G

    2011-01-01

    We test the effects of reconstruction techniques on 2D data to determine the best approach. We obtained a time-series of spectropolarimetric data in the Fe I line at 630.25 nm with the Goettingen Fabry-Perot Interferometer (FPI), accompanied by imaging data at 431.3 nm and Ca II H. We apply both speckle and (MO)MFBD techniques. We compare the spatial resolution and investigate the impact of the reconstruction on spectral characteristics. The speckle reconstruction and MFBD perform similarly for our imaging data, with nearly identical intensity contrasts. MFBD provides a better and more homogeneous resolution at the shortest wavelength. The MOMFBD and speckle deconvolution of the intensity spectra lead to similar results, but our choice of settings for the MOMFBD yields an intensity contrast smaller by about 2% at a comparable spatial resolution. None of the reconstruction techniques introduces artifacts in the intensity spectra. The speckle deconvolution (MOMFBD) has a rms noise in V/I of 0.32% (0.20%). ...

  15. Bayesian deconvolution of scanning electron microscopy images using point-spread function estimation and non-local regularization.

    Science.gov (United States)

    Roels, Joris; Aelterman, Jan; De Vylder, Jonas; Hiep Luong; Saeys, Yvan; Philips, Wilfried

    2016-08-01

    Microscopy is one of the most essential imaging techniques in life sciences. High-quality images are required in order to solve (potentially life-saving) biomedical research problems. Many microscopy techniques do not achieve sufficient resolution for these purposes, being limited by physical diffraction and hardware deficiencies. Electron microscopy addresses optical diffraction by measuring emitted or transmitted electrons instead of photons, yielding nanometer resolution. Despite pushing back the diffraction limit, blur should still be taken into account because of practical hardware imperfections and remaining electron diffraction. Deconvolution algorithms can remove some of the blur in post-processing but they depend on knowledge of the point-spread function (PSF) and should accurately regularize noise. Any errors in the estimated PSF or noise model will reduce their effectiveness. This paper proposes a new procedure to estimate the lateral component of the point spread function of a 3D scanning electron microscope more accurately. We also propose a Bayesian maximum a posteriori deconvolution algorithm with a non-local image prior which employs this PSF estimate and previously developed noise statistics. We demonstrate visual quality improvements and show that applying our method improves the quality of subsequent segmentation steps.

  16. Identification and deconvolution of cross-resistance signals from antimalarial compounds using multidrug-resistant Plasmodium falciparum strains.

    Science.gov (United States)

    Chugh, Monika; Scheurer, Christian; Sax, Sibylle; Bilsland, Elizabeth; van Schalkwyk, Donelly A; Wicht, Kathryn J; Hofmann, Natalie; Sharma, Anil; Bashyam, Sridevi; Singh, Shivendra; Oliver, Stephen G; Egan, Timothy J; Malhotra, Pawan; Sutherland, Colin J; Beck, Hans-Peter; Wittlin, Sergio; Spangenberg, Thomas; Ding, Xavier C

    2015-02-01

    Plasmodium falciparum, the most deadly agent of malaria, displays a wide variety of resistance mechanisms in the field. The ability of antimalarial compounds in development to overcome these must therefore be carefully evaluated to ensure uncompromised activity against real-life parasites. We report here on the selection and phenotypic as well as genotypic characterization of a panel of sensitive and multidrug-resistant P. falciparum strains that can be used to optimally identify and deconvolute the cross-resistance signals from an extended panel of investigational antimalarials. As a case study, the effectiveness of the selected panel of strains was demonstrated using the 1,2,4-oxadiazole series, a newly identified antimalarial series of compounds with in vitro activity against P. falciparum at nanomolar concentrations. This series of compounds was found to be inactive against several multidrug-resistant strains, and the deconvolution of this signal implicated pfcrt, the genetic determinant of chloroquine resistance. Targeted mode-of-action studies further suggested that this new chemical series might act as falcipain 2 inhibitors, substantiating the suggestion that these compounds have a site of action similar to that of chloroquine but a distinct mode of action. New antimalarials must overcome existing resistance and, ideally, prevent its de novo appearance. The panel of strains reported here, which includes recently collected as well as standard laboratory-adapted field isolates, is able to efficiently detect and precisely characterize cross-resistance and, as such, can contribute to the faster development of new, effective antimalarial drugs.

  17. Patience of matrix games

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.;

    2013-01-01

    For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for n×n win–lose–draw games (i.e. (-1,0,1) matrix games) nonzero probabilities smaller than n^(-O(n)) are never needed. We also construct an explicit n×n win–lose game such that the unique optimal...
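
    To make "how small must the nonzero probabilities in optimal strategies be" concrete, optimal mixed strategies of a zero-sum matrix game can be computed by linear programming. The value-shift formulation below is standard; the rock-paper-scissors payoff matrix is just an example.

```python
# Optimal mixed strategy of a zero-sum matrix game via linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]], float)               # rock-paper-scissors

shift = 2.0                                        # make all payoffs positive
B = A + shift
m = B.shape[0]
# max v s.t. p^T B >= v 1, p >= 0, sum p = 1  <=>  min 1^T q, B^T q >= 1, q >= 0
res = linprog(np.ones(m), A_ub=-B.T, b_ub=-np.ones(B.shape[1]),
              bounds=[(0, None)] * m)
p = res.x / res.x.sum()
print("optimal strategy:", p, " game value:", 1.0 / res.x.sum() - shift)
```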

  18. Transfer function matrix

    Science.gov (United States)

    Seraji, H.

    1987-01-01

    Given a multivariable system, it is proved that the numerator matrix N(s) of the transfer function evaluated at any system pole either has unity rank or is a null matrix. It is also shown that N(s) evaluated at any transmission zero of the system has rank deficiency. Examples are given for illustration.
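
    A small numerical illustration of the stated pole property, on a constructed example rather than one from the paper: for G(s) = N(s)/d(s) with d(s) = (s+1)(s+2), the numerator matrix evaluated at each pole has rank one.

```python
# Rank of the numerator matrix N(s) at the system poles (constructed example).
import numpy as np

def N(s):
    return np.array([[s + 2.0, 0.0],
                     [0.0, s + 1.0]])

for pole in (-1.0, -2.0):
    print(f"rank of N({pole}) =", np.linalg.matrix_rank(N(pole)))   # 1 and 1
```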

  19. An automated technique for detailed μ-FTIR mapping of diamond and spectral deconvolution

    Science.gov (United States)

    Howell, Dan; Griffin, Bill; O'Neill, Craig; O'Reilly, Suzanne; Pearson, Norman; Handley, Heather

    2010-05-01

    other commonly found defects and impurities, whether these are intrinsic defects like platelets, extrinsic defects like hydrogen or boron atoms, or inclusions of minerals or fluids. Recent technological developments in the field of spectroscopy allow detailed μ-FTIR analysis to be performed rapidly in an automated fashion. The Nicolet iN10 microscope has an integrated design that maximises signal throughput and allows spectra to be collected with greater efficiency than is possible with conventional μ-FTIR spectrometer-microscope systems. Combining this with a computer-controlled x-y stage allows for the automated measuring of several thousand spectra in only a few hours. This affords us the ability to record 2D IR maps of diamond plates with minimal effort, but has created the need for an automated technique to process the large quantities of IR spectra and obtain quantitative data from them. We will present new software routines that can process large batches of IR spectra, including baselining, conversion to absorption coefficient, and deconvolution to identify and quantify the various nitrogen components. Possible sources of error in each step of the process will be highlighted so that the data produced can be critically assessed. The end result will be the production of various false colour 2D maps that show the distribution of nitrogen concentrations and aggregation states, as well as other identifiable components.
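
    The deconvolution step that quantifies nitrogen components can be posed as a non-negative least-squares fit of component spectra to a baseline-corrected absorption spectrum. In the sketch below, the component shapes are placeholder Gaussians near commonly quoted A-centre, B-centre, and platelet positions, not calibrated templates.

```python
# Non-negative least-squares fit of nitrogen-defect components to a synthetic
# baseline-corrected IR spectrum; the component shapes are placeholders.
import numpy as np
from scipy.optimize import nnls

wn = np.linspace(1000.0, 1400.0, 400)              # wavenumber grid (cm^-1)
gauss = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)
components = np.column_stack([gauss(1282.0, 30.0),  # "A centre" stand-in
                              gauss(1175.0, 25.0),  # "B centre" stand-in
                              gauss(1365.0, 8.0)])  # "platelet" stand-in

truth = np.array([2.0, 0.7, 0.3])
spectrum = components @ truth \
    + 0.02 * np.random.default_rng(6).standard_normal(wn.size)

coeffs, resid = nnls(components, spectrum)
print("recovered component amplitudes:", coeffs)   # ~ [2.0, 0.7, 0.3]
```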

  1. Improvement in White Matter Tract Reconstruction with Constrained Spherical Deconvolution and Track Density Mapping in Low Angular Resolution Data: A Pediatric Study and Literature Review

    Directory of Open Access Journals (Sweden)

    Benedetta Toselli

    2017-08-01

    Full Text Available Introduction: Diffusion-weighted magnetic resonance imaging (DW-MRI) allows noninvasive investigation of brain structure in vivo. Diffusion tensor imaging (DTI) is a frequently used application of DW-MRI that assumes a single main diffusion direction per voxel, and is therefore not well suited for reconstructing crossing fiber tracts. Among the solutions developed to overcome this problem, constrained spherical deconvolution with probabilistic tractography (CSD-PT) has provided superior quality results in clinical settings on adult subjects; however, it requires particular acquisition parameters and long sequences, which may limit clinical usage in the pediatric age group. The aim of this work was to compare the results of DTI with those of track density imaging (TDI) maps and CSD-PT on data from neonates and children, acquired with the low angular resolution and low b-value diffusion sequences commonly used in pediatric clinical MRI examinations. Materials and methods: We analyzed DW-MRI studies of 50 children (eight neonates aged 3–28 days, 20 infants aged 1–8 months, and 22 children aged 2–17 years) acquired on a 1.5 T Philips scanner using 34 gradient directions and a b-value of 1,000 s/mm². Other sequence parameters included 60 axial slices; acquisition matrix, 128 × 128; average scan time, 5:34 min; voxel size, 1.75 mm × 1.75 mm × 2 mm; one b = 0 image. For each subject, we computed principal eigenvector (EV) maps and directionally encoded color TDI (DEC-TDI) maps from whole-brain tractograms obtained with CSD-PT; the cerebellar-thalamic, corticopontocerebellar, and corticospinal tracts were reconstructed using both CSD-PT and DTI. Results were compared by two neuroradiologists using a 5-point qualitative score. Results: The DEC-TDI maps obtained presented higher anatomical detail than EV maps, as assessed by visual inspection. In all subjects, white matter (WM) tracts were successfully reconstructed using both

  2. Improved Matrix Uncertainty Selector

    CERN Document Server

    Rosenbaum, Mathieu

    2011-01-01

    We consider the regression model with observation error in the design: y=X\\theta* + e, Z=X+N. Here the random vector y in R^n and the random n*p matrix Z are observed, the n*p matrix X is unknown, N is an n*p random noise matrix, e in R^n is a random noise vector, and \\theta* is a vector of unknown parameters to be estimated. We consider the setting where the dimension p can be much larger than the sample size n and \\theta* is sparse. Because of the presence of the noise matrix N, the commonly used Lasso and Dantzig selector are unstable. An alternative procedure called the Matrix Uncertainty (MU) selector has been proposed in Rosenbaum and Tsybakov (2010) in order to account for the noise. The properties of the MU selector have been studied in Rosenbaum and Tsybakov (2010) for sparse \\theta* under the assumption that the noise matrix N is deterministic and its values are small. In this paper, we propose a modification of the MU selector when N is a random matrix with zero-mean entries having the variances th...

  3. Elementary matrix theory

    CERN Document Server

    Eves, Howard

    1980-01-01

    The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri

  4. Rheocasting Al matrix composites

    Energy Technology Data Exchange (ETDEWEB)

    Girot, F.A.; Albingre, L.; Quenisset, J.M.; Naslain, R.

    1987-11-01

    A development status account is given for the rheocasting method of Al-alloy matrix/SiC-whisker composites, which involves the incorporation and homogeneous distribution of 8-15 vol pct of whiskers through the stirring of the semisolid matrix melt while retaining sufficient fluidity for casting. Both 1-, 3-, and 6-mm fibers of Nicalon SiC and SiC whisker reinforcements have been experimentally investigated, with attention to the characterization of the resulting microstructures and the effects of fiber-matrix interactions. A thin silica layer is found at the whisker surface. 7 references.

  5. Mueller matrix differential decomposition.

    Science.gov (United States)

    Ortega-Quijano, Noé; Arce-Diego, José Luis

    2011-05-15

    We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. © 2011 Optical Society of America
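
    The core computation of the differential decomposition fits in a few lines: take the matrix logarithm of the Mueller matrix and split it into G-antisymmetric (nondepolarizing) and G-symmetric (depolarizing) parts with respect to the Minkowski metric G = diag(1, -1, -1, -1). The sketch below applies this to a simple linear retarder; it is a generic illustration, not the authors' code.

```python
# Differential Mueller decomposition: matrix logarithm plus a G-symmetric /
# G-antisymmetric split, demonstrated on a horizontal linear retarder.
import numpy as np
from scipy.linalg import logm

d = np.pi / 6                                      # accumulated retardance
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, np.cos(d), np.sin(d)],
              [0.0, 0.0, -np.sin(d), np.cos(d)]])  # horizontal linear retarder

G = np.diag([1.0, -1.0, -1.0, -1.0])               # Minkowski metric
m = np.real(logm(M))                               # differential Mueller matrix
m_nondep = 0.5 * (m - G @ m.T @ G)                 # polarizing part
m_dep = 0.5 * (m + G @ m.T @ G)                    # depolarizing part
print("retardance recovered:", m_nondep[2, 3])     # ~ pi/6
print("depolarizing part (should be ~0):", np.abs(m_dep).max())
```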

  6. Effect of confounding variables on hemodynamic response function estimation using averaging and deconvolution analysis: An event-related NIRS study.

    Science.gov (United States)

    Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice

    2017-07-15

    Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNR, especially above -5 dB regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of
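
    A minimal sketch of the FIR-basis deconvolution method (DM) discussed above: regress the measured signal on lagged copies of the event train and read the HRF off the coefficients. Sampling rate, event density, and the "true" response below are synthetic assumptions.

```python
# FIR-basis deconvolution of a hemodynamic response from a rapid event design.
import numpy as np

rng = np.random.default_rng(7)
fs, n, nlags = 10, 6000, 150                        # 10 Hz, 10 min, 15 s window
events = np.zeros(n)
events[rng.choice(n - nlags, 40, replace=False)] = 1.0   # rapid event design

lag = np.arange(nlags) / fs
hrf_true = lag ** 3 * np.exp(-lag / 1.2)            # gamma-like response
hrf_true /= hrf_true.max()

signal = np.convolve(events, hrf_true)[:n] + 0.5 * rng.standard_normal(n)

X = np.column_stack([np.roll(events, k) for k in range(nlags)])
for k in range(nlags):
    X[:k, k] = 0.0                                  # undo np.roll wrap-around

hrf_est, *_ = np.linalg.lstsq(X, signal, rcond=None)
print("peak latency true vs estimated (s):",
      lag[hrf_true.argmax()], lag[hrf_est.argmax()])
```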

  7. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar Electrokinetic Chromatography Data for the Quantitation of Trinitrotoluene in Mixtures of Other Nitroaromatic Compounds

    Science.gov (United States)

    2014-02-24

    Subject terms: micellar electrokinetic chromatography (MEKC); capillary electrophoresis; nitroaromatic explosives; dinitrotoluene (DNT); trinitrotoluene (TNT); electroosmotic flow (EOF); partial least squares regression (PLS).

  8. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on the "iterative peak fitting deconvolution" method and a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra from IAEA benchmark protocol tests and from measurements are studied. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert parameter fine tuning.
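
    A toy version of the reference "iterative peak fitting" analysis: two overlapping Gaussian full-energy peaks on a linear continuum, fitted by nonlinear least squares. All channel numbers and parameters are invented; real HPGe analysis adds many refinements.

```python
# Fit of two overlapping Gaussian peaks plus a linear Compton continuum.
import numpy as np
from scipy.optimize import curve_fit

def model(ch, a1, c1, s1, a2, c2, s2, b0, b1):
    g = lambda a, c, s: a * np.exp(-0.5 * ((ch - c) / s) ** 2)
    return g(a1, c1, s1) + g(a2, c2, s2) + b0 + b1 * ch

rng = np.random.default_rng(8)
ch = np.arange(400, 600, dtype=float)
truth = (900.0, 480.0, 5.0, 90.0, 495.0, 5.0, 50.0, -0.05)   # 10:1 amplitudes
counts = rng.poisson(model(ch, *truth)).astype(float)

p0 = (800, 478, 4, 120, 497, 4, 40, 0)             # rough initial guesses
popt, _ = curve_fit(model, ch, counts, p0=p0, sigma=np.sqrt(counts + 1.0))
area = lambda a, s: a * s * np.sqrt(2.0 * np.pi)
print("fitted peak areas:", area(popt[0], popt[2]), area(popt[3], popt[5]))
```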

  9. Pesticide-Exposure Matrix

    Science.gov (United States)

    The "Pesticide-exposure Matrix" was developed to help epidemiologists and other researchers identify the active ingredients to which people were likely exposed when their homes and gardens were treated for pests in past years.

  10. Matrix theory of gravitation

    CERN Document Server

    Koehler, Wolfgang

    2011-01-01

    A new classical theory of gravitation within the framework of general relativity is presented. It is based on a matrix formulation of four-dimensional Riemann spaces and uses no artificial fields or adjustable parameters. The geometrical stress-energy tensor is derived from a matrix-trace Lagrangian, which is not equivalent to the curvature scalar R. To enable a direct comparison with the Einstein theory, a tetrad formalism is utilized, which shows similarities to teleparallel gravitation theories but uses complex tetrads. Matrix theory might solve a 27-year-old, fundamental problem of those theories (sec. 4.1). For the standard test cases (PPN scheme, Schwarzschild solution), no differences from the Einstein theory are found. However, the matrix theory exhibits novel, interesting vacuum solutions.

  11. Matrix comparison, Part 2

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg; Borlund, Pia

    2007-01-01

    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, matrix generation and the composition of proximity measures, are introduced and discussed. In this second part, the authors introduce and thoroughly demonstrate two related matrix comparison techniques, the Mantel test and Procrustes analysis, respectively. These techniques can compare... important. Alternatively, or as a supplement, Procrustes analysis compares the actual ordination results without investigating the underlying proximity measures, by matching two configurations of the same objects in a multidimensional space. An advantage of the Procrustes analysis though, is the graphical...
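
    For readers unfamiliar with the first of these techniques, a compact Mantel test (correlate the upper triangles of two proximity matrices; build the null distribution by permuting objects) looks like the sketch below, on toy matrices.

```python
# Compact Mantel test with a permutation null distribution, on toy matrices.
import numpy as np

rng = np.random.default_rng(9)

def mantel(A, B, n_perm=2000):
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(A.shape[0])
        count += np.corrcoef(A[np.ix_(perm, perm)][iu], B[iu])[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

X = rng.random((20, 5))
D1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # distance matrix 1
D2 = D1 + 0.3 * rng.random((20, 20))
D2 = (D2 + D2.T) / 2.0                                  # symmetrized variant
r, p = mantel(D1, D2)
print(f"Mantel r = {r:.2f}, p = {p:.4f}")
```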

  12. The Matrix Organization Revisited

    DEFF Research Database (Denmark)

    Gattiker, Urs E.; Ulhøi, John Parm

    1999-01-01

    This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actually be hybrids (i.e. mix several ways of organizing to allocate resources effectively).

  13. Optical Coherency Matrix Tomography

    Science.gov (United States)

    2015-10-19

    The optical coherency matrix G spanning multiple degrees of freedom (DoFs) of the optical field has been studied theoretically, but had not been demonstrated experimentally heretofore; even in the simplest case of two binary DoFs, this coherency matrix had not been measured in its entirety to date.

  14. Matrix fractional systems

    Science.gov (United States)

    Tenreiro Machado, J. A.

    2015-08-01

    This paper addresses the matrix representation of dynamical systems from the perspective of fractional calculus. Fractional elements and fractional systems are interpreted in the light of the classical Cole-Cole, Davidson-Cole, and Havriliak-Negami heuristic models. Numerical simulations for an electrical circuit illustrate the results for matrix-based models and high fractional orders. The conclusions clarify the distinction between fractional elements and fractional systems.

  15. Hacking the Matrix.

    Science.gov (United States)

    Czerwinski, Michael; Spence, Jason R

    2017-01-05

    Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Subcellular Microanatomy by 3D Deconvolution Brightfield Microscopy: Method and Analysis Using Human Chromatin in the Interphase Nucleus

    Directory of Open Access Journals (Sweden)

    Paul Joseph Tadrous

    2012-01-01

    Full Text Available Anatomy has advanced using 3-dimensional (3D) studies at macroscopic (e.g., dissection, injection moulding of vessels, radiology) and microscopic (e.g., serial section reconstruction with light and electron microscopy) levels. This paper presents the first results in human cells of a new method of subcellular 3D brightfield microscopy. Unlike traditional 3D deconvolution and confocal techniques, this method is suitable for general application to brightfield microscopy. Unlike brightfield serial sectioning it has subcellular resolution. Results are presented on the 3D structure of chromatin in the interphase nucleus of two human cell types, hepatocyte and plasma cell. I show how the freedom to examine these structures in 3D allows greater morphological discrimination between and within cell types, and the 3D structural basis for the classical “clock-face” motif of the plasma cell nucleus is revealed. Potential further applications are discussed.

  17. Multi-channel blind deconvolution algorithm for multiple-input multiple-output DS/CDMA system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Direct-sequence spread spectrum transmission can be realized at low SNR and has a low probability of detection. How to obtain the original users' signals in a non-cooperative context is a key problem. In practice, the received DS/CDMA sources are linear convolutive mixtures, so a more complex multichannel blind deconvolution (MBD) algorithm is required to achieve better source separation. An improved MBD algorithm for separating linearly convolved mixtures of signals in a CDMA system is proposed. The algorithm is based on minimizing the average squared cross-output-channel correlation. The mixture coefficients are totally unknown, while some knowledge about the temporal model exists. Results show that the proposed algorithm achieves accurate separation with low computational complexity.

  18. The mathematics of a successful deconvolution: a quantitative assessment of mixture-based combinatorial libraries screened against two formylpeptide receptors.

    Science.gov (United States)

    Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia

    2013-05-30

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  19. The Mathematics of a Successful Deconvolution: A Quantitative Assessment of Mixture-Based Combinatorial Libraries Screened Against Two Formylpeptide Receptors

    Directory of Open Access Journals (Sweden)

    Richard A. Houghten

    2013-05-01

    Full Text Available In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  1. Thorium concentrations in the lunar surface. II - Deconvolution modeling and its application to the regions of Aristarchus and Mare Smythii

    Science.gov (United States)

    Haines, E. L.; Etchegaray-Ramirez, M. I.; Metzger, A. E.

    1978-01-01

    The broad angular response which characterized the Apollo gamma ray spectrometer resulted in a loss of spatial resolution and some of the contrast in determining surface concentrations within lunar regions small compared to the field of view. A deconvolution technique has been developed which removes much of this instrumental effect, thereby improving both spatial resolution and accuracy at the cost of a loss in precision. Geometric models of regional thorium distribution are convolved with the response function of the instrument to yield a predicted distribution, which is compared with the observed data field for quality of fit. Application to areas which include Aristarchus and Mare Smythii confirms some geological relationships and fails to support others.

  2. A simple ligation-based method to increase the information density in sequencing reactions used to deconvolute nucleic acid selections

    Science.gov (United States)

    Childs-Disney, Jessica L.; Disney, Matthew D.

    2008-01-01

    Herein, a method is described to increase the information density of sequencing experiments used to deconvolute nucleic acid selections. The method is facile and should be applicable to any selection experiment. A critical feature of this method is the use of biotinylated primers to amplify and encode a BamHI restriction site on both ends of a PCR product. After amplification, the PCR reaction is captured onto streptavidin resin, washed, and digested directly on the resin. Resin-based digestion affords clean product that is devoid of partially digested products and unincorporated PCR primers. The product's complementary ends are annealed and ligated together with T4 DNA ligase. Analysis of ligation products shows formation of concatemers of different length and little detectable monomer. Sequencing results produced data that routinely contained three to four copies of the library. This method allows for more efficient formulation of structure-activity relationships since multiple active sequences are identified from a single clone. PMID:18065718

  3. Extensive Direct Subcortical Cerebellum-Basal Ganglia Connections in Human Brain as Revealed by Constrained Spherical Deconvolution Tractography

    Directory of Open Access Journals (Sweden)

    Demetrio Milardi

    2016-03-01

    Full Text Available The connections between the cerebellum and basal ganglia were long assumed to occur at the level of the neocortex. However, evidence from animal studies has challenged this old perspective, showing extensive subcortical pathways linking the cerebellum with the basal ganglia. Here we tested whether these connections also exist in the human brain by using diffusion magnetic resonance imaging and tractography. Fifteen healthy subjects were analyzed by using the constrained spherical deconvolution technique on data obtained with a 3T magnetic resonance imaging scanner. We found extensive connections running between the subthalamic nucleus and the cerebellar cortex and, as a novel result, we demonstrated a direct route linking the dentate nucleus to the internal globus pallidus as well as to the substantia nigra. These findings may open a new scenario for the interpretation of basal ganglia disorders.

  4. Direct deconvolution of electric and magnetic responses of single nanoparticles by Fourier space surface plasmon resonance microscopy

    Science.gov (United States)

    Liu, C.; Chan, C. F.; Ong, H. C.

    2016-11-01

    We use polarization-resolved surface plasmon resonance microscopy to image single dielectric nanoparticles. In real space, the nanoparticles exhibit V-shape diffraction patterns due to the interference between the incident surface plasmon polariton wave and the evanescent scattered waves, which arise from the interplay between the electric and magnetic dipoles of the nanoparticle. By using cross-polarized Fourier space imaging to extract only the scattered waves, we find the angular far-field intensity corresponds very well to the near-field scattering distribution, as confirmed by both analytical and numerical calculations. As a result, we directly deconvolute the contributions of electric and magnetic dipoles to the scattered fields without involving near-field techniques.

  5. The Mathematics of a Successful Deconvolution: A Quantitative Assessment of Mixture-Based Combinatorial Libraries Screened Against Two Formylpeptide Receptors

    Science.gov (United States)

    Santos, Radleigh G.; Appel, Jon R.; Giulianotti, Marc A.; Edwards, Bruce S.; Sklar, Larry A.; Houghten, Richard A.; Pinilla, Clemencia

    2014-01-01

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays. PMID:23722730

  6. Application of spectral deconvolution and inverse mechanistic modelling as a tool for root cause investigation in protein chromatography.

    Science.gov (United States)

    Brestrich, Nina; Hahn, Tobias; Hubbuch, Jürgen

    2016-03-11

    In chromatographic protein purification, process variations, aging of columns, or processing errors can lead to deviations of the expected elution behavior of product and contaminants and can result in a decreased pool purity or yield. A different elution behavior of all or several involved species leads to a deviating chromatogram. The causes for deviations are however hard to identify by visual inspection and complicate the correction of a problem in the next cycle or batch. To overcome this issue, a tool for root cause investigation in protein chromatography was developed. The tool combines a spectral deconvolution with inverse mechanistic modelling. Mid-UV spectral data and Partial Least Squares Regression were first applied to deconvolute peaks to obtain the individual elution profiles of co-eluting proteins. The individual elution profiles were subsequently used to identify errors in process parameters by curve fitting to a mechanistic chromatography model. The functionality of the tool for root cause investigation was successfully demonstrated in a model protein study with lysozyme, cytochrome c, and ribonuclease A. Deviating chromatograms were generated by deliberately caused errors in the process parameters flow rate and sodium-ion concentration in loading and elution buffer according to a design of experiments. The actual values of the three process parameters and, thus, the causes of the deviations were estimated with errors of less than 4.4%. Consequently, the established tool for root cause investigation is a valuable approach to rapidly identify process variations, aging of columns, or processing errors. This might help to minimize batch rejections or contribute to an increased productivity.

  7. Deconvolution analysis of 24-h serum cortisol profiles informs the amount and distribution of hydrocortisone replacement therapy.

    Science.gov (United States)

    Peters, Catherine J; Hill, Nathan; Dattani, Mehul T; Charmandari, Evangelia; Matthews, David R; Hindmarsh, Peter C

    2013-03-01

    Hydrocortisone therapy is based on a dosing regimen derived from estimates of cortisol secretion, but little is known of how the dose should be distributed throughout the 24 h. We have used deconvolution analysis of 24-h serum cortisol profiles to determine 24-h cortisol secretion and distribution to inform hydrocortisone dosing schedules in young children and older adults. Twenty-four-hour serum cortisol profiles from 80 adults (41 men, aged 60-74 years) and 29 children (24 boys, aged 5-9 years) were subjected to deconvolution analysis using an 80-min half-life to ascertain total cortisol secretion and its distribution throughout the 24-h period. Mean daily cortisol secretion was similar between adults (6.3 mg/m² body surface area/day, range 5.1-9.3) and children (8.0 mg/m² body surface area/day, range 5.3-12.0). Peak serum cortisol concentration was higher in children compared with adults, whereas nadir serum cortisol concentrations were similar. Timing of the peak serum cortisol concentration was similar (07.05-07.25), whereas that of the nadir concentration occurred later in adults (midnight) compared with children (22.48) (P = 0.003). Children had the highest percentage of cortisol secretion between 06.00 and 12.00 (38.4%), whereas in adults this took place between midnight and 06.00 (45.2%). These observations suggest that the daily hydrocortisone replacement dose should be equivalent on average to 6.3 mg/m² body surface area/day in adults and 8.0 mg/m² body surface area/day in children. Differences in distribution of the total daily dose between older adults and young children need to be taken into account when using a three or four times per day dosing regimen. © 2012 Blackwell Publishing Ltd.

  8. Reconstruction of high resolution time series from slow-response broadband solar and terrestrial irradiance measurements by deconvolution

    Directory of Open Access Journals (Sweden)

    A. Ehrlich

    2015-05-01

    Full Text Available Broadband solar and terrestrial irradiance measurements of high temporal resolution are needed to study inhomogeneous clouds or surfaces and to derive vertical profiles of heating/cooling rates at cloud top. An efficient method to enhance the temporal resolution of slow-response measurements of broadband irradiance using pyranometers or pyrgeometers is introduced. It is based on the deconvolution theorem of the Fourier transform to restore the amplitude and phase shift of high-frequency fluctuations. It is shown that the quality of reconstruction depends on the instrument noise, the pyrgeometer response time, and the frequency of the oscillations. The method is tested in laboratory measurements for synthetic time series including a boxcar function and periodic oscillations using a CGR-4 pyrgeometer with a response time of 3 s. The originally slow-response pyrgeometer data were reconstructed to higher resolution and compared to the predefined synthetic time series. The reconstruction of the time series worked up to oscillations of 0.5 Hz frequency and 2 W m−2 amplitude if the sampling frequency of the data acquisition is 16 kHz or higher. For oscillations faster than 2 Hz the instrument noise exceeded the reduced amplitude of the oscillations in the measurements and the reconstruction failed. The method was applied to airborne measurements of upward terrestrial irradiance from the VERDI (Vertical Distribution of Ice in Arctic Clouds) field campaign. Pyrgeometer data above open leads in sea ice and a broken cloud field were reconstructed and compared to KT19 infrared thermometer data. The reconstruction of amplitude and phase shift of the deconvoluted data improved the agreement with the KT19 data. Cloud top temperatures were improved by up to 1 K above broken clouds, while an underestimation of 2.5 W m−2 was found for the upward irradiance over small leads when using the slow-response data. The limitations of the method with respect to instrument noise and
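
    The Fourier restoration idea can be sketched by modelling the pyrgeometer as a first-order low-pass with a 3 s response time (the CGR-4 value quoted above), dividing the measured spectrum by the instrument transfer function, and cutting off frequencies where noise dominates. Everything else below, including the 2 Hz cutoff, is a synthetic assumption.

```python
# FFT deconvolution of a slow first-order sensor response (synthetic example).
import numpy as np
from numpy.fft import rfft, irfft, rfftfreq

fs, tau, n = 100.0, 3.0, 8192                      # Hz, s, samples
t = np.arange(n) / fs
truth = 300.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)  # 0.5 Hz, 2 W m-2 oscillation

alpha = 1.0 / (1.0 + fs * tau)                     # discrete first-order sensor
meas = np.empty(n); meas[0] = truth[0]
for i in range(1, n):
    meas[i] = meas[i - 1] + alpha * (truth[i] - meas[i - 1])
meas += 0.05 * np.random.default_rng(10).standard_normal(n)

f = rfftfreq(n, 1.0 / fs)
H = 1.0 / (1.0 + 2j * np.pi * f * tau)             # first-order transfer fn
spec = rfft(meas) / H                              # deconvolve the response
spec[f > 2.0] = 0.0                                # noise cutoff (assumed)
rec = irfft(spec, n)

amp = lambda s: s[2000:].std() * np.sqrt(2.0)      # sinusoid amplitude estimate
print("amplitude measured vs restored:", amp(meas), amp(rec))   # ~0.2 vs ~2
```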

  9. Matrix Information Geometry

    CERN Document Server

    Bhatia, Rajendra

    2013-01-01

    This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held at Ecole Polytechnique and the Thales Research and Technology Center, Palaiseau, France, on February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited French or Indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contributions or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.

  10. Ceramic Matrix Composites.

    Directory of Open Access Journals (Sweden)

    J. Mukerji

    1993-10-01

    Full Text Available The present state of the knowledge of ceramic-matrix composites has been reviewed. The fracture toughness of present structural ceramics is not sufficient to permit the design of high-performance machines with ceramic parts. They also fail by catastrophic brittle fracture. It is generally believed that further improvement of fracture toughness is only possible by making composites of ceramics with ceramic fibres, particulates or platelets. Only ceramic-matrix composites capable of working above 1000 °C are dealt with here, keeping reinforced plastics and metal-reinforced ceramics outside the purview. The author discusses the basic mechanisms of toughening, the fabrication of composites and the difficulties involved. Properties of available fibres and whiskers are given. The best results obtained so far are indicated. The limitations of improvement in the properties of ceramic-matrix composites are discussed.

  11. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves an (n-k) multiplicative approximation ratio.
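
    The objective is easy to state computationally: delete k columns so that the sum over rows of each row's maximum surviving entry is minimized. A brute-force sketch for small instances (exponential in n, illustration only; the paper's approximation algorithm is not reproduced here):

    ```python
    import numpy as np
    from itertools import combinations

    def matrix_interdiction(M, k):
        """Exhaustively find k columns whose removal minimizes the sum
        over rows of the row maximum (small instances only)."""
        n = M.shape[1]
        best_cols, best_val = None, np.inf
        for removed in combinations(range(n), k):
            keep = [j for j in range(n) if j not in removed]
            val = M[:, keep].max(axis=1).sum()
            if val < best_val:
                best_val, best_cols = val, removed
        return best_cols, best_val

    M = np.array([[3.0, 1.0, 5.0],
                  [2.0, 4.0, 1.0]])
    print(matrix_interdiction(M, k=1))   # -> ((1,), 7.0); ties with column 2
    ```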

  12. Extracellular matrix structure.

    Science.gov (United States)

    Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K

    2016-02-01

    Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors, forming a complex network in which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from the ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes the complex ECM structure so as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented.

  13. Finite Temperature Matrix Theory

    CERN Document Server

    Meana, M L; Peñalba, J P; Meana, Marco Laucelli; Peñalba, Jesús Puente

    1998-01-01

    We present the way in which the Lorentz-invariant canonical partition function for Matrix Theory, as a light-cone formulation of M-theory, can be computed. We explicitly show how, when the eleventh dimension is decompactified, the N=1 eleven-dimensional SUGRA partition function appears. From this particular analysis we also clarify the question about the discernibility problem when making statistics with supergravitons (the N! problem) in Matrix black hole configurations. We also provide a high-temperature expansion which captures some structure of the canonical partition function when interactions amongst D-particles are turned on. The connection with the semi-classical computations thermalizing the open superstrings attached to a D-particle is also clarified through a Born-Oppenheimer approximation. Some ideas about how Matrix Theory would describe the complementary degrees of freedom of the massless content of eleven-dimensional SUGRA are also discussed.

  14. A method for dynamic spectrophotometric measurements in vivo using principal component analysis-based spectral deconvolution.

    Science.gov (United States)

    Zupancic, Gregor

    2003-10-01

    A method was developed for dynamic spectrophotometric measurements in vivo in the presence of non-specific spectral changes due to external disturbances. This method was used to measure changes in mitochondrial respiratory pigment redox states in photoreceptor cells of live, white-eyed mutants of the blowfly Calliphora vicina. The changes were brought about by exchanging the atmosphere around an immobilised animal from air to N2 and back again using a rapid gas exchange system. During an experiment, reflectance spectra were measured by a linear CCD-array spectrophotometer. The method involves the pre-processing steps of difference-spectra calculation and digital filtering in one and two dimensions, followed by time-domain principal component analysis (PCA). PCA yielded seven significant time-domain principal component vectors and seven corresponding spectral score vectors. In addition, through PCA we also obtained a time course of changes common to all wavelengths, the residual vector, corresponding to non-specific spectral changes due to preparation movement or mitochondrial swelling. In the final step, the redox-state time courses were obtained by fitting linear combinations of respiratory pigment difference spectra to each of the seven score vectors. The resulting matrix of factors was then multiplied by the matrix of seven principal component vectors to yield the time courses of the respiratory pigment redox states. The method can be used, with minor modifications, in many cases of time-resolved optical measurements of multiple overlapping spectral components, especially in situations where non-specific external influences cannot be disregarded.
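
    The pipeline described (time-domain PCA of the spectral data set, followed by fitting known pigment difference spectra to the score vectors and mapping the factors back onto the principal components) can be sketched with standard linear algebra. A minimal sketch on synthetic data; the shapes, the plain SVD, and the two hypothetical "pigment" spectra are assumptions, not the author's code:

    ```python
    import numpy as np

    # Synthetic data: spectra (wavelengths x time points) built from two
    # hypothetical "pigment" difference spectra with distinct redox courses.
    rng = np.random.default_rng(0)
    wl = np.linspace(400, 650, 120)
    pigments = np.stack([np.exp(-((wl - 550) / 10) ** 2),
                         np.exp(-((wl - 605) / 12) ** 2)])
    t = np.linspace(0, 60, 200)
    redox = np.stack([np.tanh((t - 20) / 5), np.tanh((t - 35) / 8)])
    D = pigments.T @ redox + 0.01 * rng.standard_normal((wl.size, t.size))

    # Time-domain PCA via SVD; keep the significant components.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    k = 2
    scores, pcs = U[:, :k] * s[:k], Vt[:k]      # spectral scores, time PCs

    # Fit pigment difference spectra to the score vectors (least squares),
    # then map the factor matrix onto the time PCs -> redox time courses.
    F, *_ = np.linalg.lstsq(pigments.T, scores, rcond=None)
    redox_est = F @ pcs
    print(np.corrcoef(redox_est[0], redox[0])[0, 1])   # close to +/-1
    ```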

  15. Matrixed business support comparison study.

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Josh D.

    2004-11-01

    The Matrixed Business Support Comparison Study reviewed the current matrixed Chief Financial Officer (CFO) division staff models at Sandia National Laboratories. There were two primary drivers of this analysis: (1) the increasing number of financial staff matrixed to mission customers and (2) the desire to further understand the matrix process and the opportunities and challenges it creates.

  16. IIB Matrix Model

    CERN Document Server

    Aoki, H; Kawai, H; Kitazawa, Y; Tada, T; Tsuchiya, A

    1999-01-01

    We review our proposal for a constructive definition of superstring, type IIB matrix model. The IIB matrix model is a manifestly covariant model for space-time and matter which possesses N=2 supersymmetry in ten dimensions. We refine our arguments to reproduce string perturbation theory based on the loop equations. We emphasize that the space-time is dynamically determined from the eigenvalue distributions of the matrices. We also explain how matter, gauge fields and gravitation appear as fluctuations around dynamically determined space-time.

  17. Little IIB Matrix Model

    CERN Document Server

    Kitazawa, Y; Saito, O; Kitazawa, Yoshihisa; Mizoguchi, Shun'ya; Saito, Osamu

    2006-01-01

    We study the zero-dimensional reduced model of D=6 pure super Yang-Mills theory and argue that the large N limit describes the (2,0) Little String Theory. The one-loop effective action shows that the force exerted between two diagonal blocks of matrices behaves as 1/r^4, implying a six-dimensional spacetime. We also observe that it is due to non-gravitational interactions. We construct wave functions and vertex operators which realize the D=6, (2,0) tensor representation. We also comment on other "little" analogues of the IIB matrix model and Matrix Theory with fewer supercharges.

  18. Elementary matrix algebra

    CERN Document Server

    Hohn, Franz E

    2012-01-01

    This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology.Other subjects include the complete treatment of the structur

  19. Rheocasting Al Matrix Composites

    Science.gov (United States)

    Girot, F. A.; Albingre, L.; Quenisset, J. M.; Naslain, R.

    1987-11-01

    Aluminum alloy matrix composites reinforced by SiC short fibers (or whiskers) can be prepared by rheocasting, a process which consists of the incorporation and homogeneous distribution of the reinforcement by stirring within a semi-solid alloy. Using this technique, composites containing fiber volume fractions in the range of 8-15% have been obtained for various fiber lengths (i.e., 1 mm, 3 mm and 6 mm for SiC fibers). This paper attempts to delineate the best compocasting conditions for aluminum matrix composites reinforced by short SiC fibers (e.g., Nicalon) or SiC whiskers (e.g., Tokamax) and to characterize the resulting microstructures.

  20. Reduced Google matrix

    CERN Document Server

    Frahm, K M

    2016-01-01

    Using parallels with the quantum scattering theory developed for processes in nuclear and mesoscopic physics and quantum chaos, we construct a reduced Google matrix $G_R$ which describes the properties and interactions of a certain subset of selected nodes belonging to a much larger directed network. The matrix $G_R$ takes into account effective interactions between subset nodes via all their indirect links through the whole network. We argue that this approach gives new possibilities for analyzing effective interactions in a group of nodes embedded in a large directed network. Possible efficient numerical methods for the practical computation of $G_R$ are also described.
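
    The abstract does not spell out the construction, but in the authors' related work the reduced matrix takes the Schur-complement form G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr, where r is the retained subset and s the rest of the network; treating that form as an assumption, a toy numpy sketch:

    ```python
    import numpy as np

    def google_matrix(A, alpha=0.85):
        """Column-stochastic Google matrix from adjacency A (A[i, j]: j -> i)."""
        n = A.shape[0]
        cols = A.sum(axis=0)
        S = A / np.where(cols == 0, 1.0, cols)
        S[:, cols == 0] = 1.0 / n                  # dangling nodes
        return alpha * S + (1 - alpha) / n

    def reduced_google_matrix(G, r):
        """Effective matrix on subset r, including all indirect paths
        through the complementary nodes s (assumed Schur-complement form)."""
        s = [i for i in range(G.shape[0]) if i not in r]
        Grr, Gss = G[np.ix_(r, r)], G[np.ix_(s, s)]
        Grs, Gsr = G[np.ix_(r, s)], G[np.ix_(s, r)]
        return Grr + Grs @ np.linalg.inv(np.eye(len(s)) - Gss) @ Gsr

    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    GR = reduced_google_matrix(google_matrix(A), r=[0, 1])
    print(GR.sum(axis=0))    # columns sum to 1: G_R stays column stochastic
    ```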

  1. Density matrix perturbation theory.

    Science.gov (United States)

    Niklasson, Anders M N; Challacombe, Matt

    2004-05-14

    An orbital-free quantum perturbation theory is proposed. It gives the response of the density matrix upon variation of the Hamiltonian by quadratically convergent recursions based on perturbed projections. The technique allows treatment of embedded quantum subsystems with a computational cost scaling linearly with the size of the perturbed region, O(N_pert), and as O(1) with the total system size. The method allows efficient high-order perturbation expansions, as demonstrated with an example involving a 10th-order expansion. Density matrix analogs of Wigner's 2n+1 rule are also presented.

  2. Complex matrix model duality

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.W.

    2010-11-15

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  3. Dynamic Matrix Rank

    DEFF Research Database (Denmark)

    Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands

    2009-01-01

    We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank-one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.

  4. Empirical codon substitution matrix

    Directory of Open Access Journals (Sweden)

    Gonnet Gaston H

    2005-06-01

    Full Text Available Abstract. Background: Codon substitution probabilities are used in many types of molecular evolution studies such as determining Ka/Ks ratios, creating ancestral DNA sequences or aligning coding DNA. Until the recent dramatic increase in genomic data enabled construction of empirical matrices, researchers relied on parameterized models of codon evolution. Here we present the first empirical codon substitution matrix entirely built from alignments of coding sequences from vertebrate DNA and thus provide an alternative to parameterized models of codon evolution. Results: A set of 17,502 alignments of orthologous sequences from five vertebrate genomes yielded 8.3 million aligned codons from which the number of substitutions between codons were counted. From these data, both a probability matrix and a matrix of similarity scores were computed. They are 64 × 64 matrices describing the substitutions between all codons. Substitutions from sense codons to stop codons are not considered, resulting in block diagonal matrices consisting of 61 × 61 entries for the sense codons and 3 × 3 entries for the stop codons. Conclusion: The amount of genomic data currently available allowed for the construction of an empirical codon substitution matrix. However, more sequence data is still needed to construct matrices from different subsets of DNA, specific to kingdoms, evolutionary distance or different amounts of synonymous change. Codon mutation matrices have advantages for alignments up to medium evolutionary distances and for usages that require DNA such as ancestral reconstruction of DNA sequences and the calculation of Ka/Ks ratios.
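
    The construction is easy to prototype: count substitutions between aligned codon pairs, keep sense and stop codons in separate blocks, and row-normalize the counts into probabilities. A toy sketch (a handful of hypothetical codon pairs stands in for the 8.3 million aligned codons):

    ```python
    import numpy as np
    from itertools import product

    BASES = "ACGT"
    CODONS = ["".join(c) for c in product(BASES, repeat=3)]
    STOPS = {"TAA", "TAG", "TGA"}
    IDX = {c: i for i, c in enumerate(CODONS)}

    def codon_substitution_matrix(pairs):
        """Row-normalized codon substitution probabilities from aligned
        codon pairs, skipping sense<->stop substitutions (block structure)."""
        C = np.zeros((64, 64))
        for a, b in pairs:
            if (a in STOPS) != (b in STOPS):   # keep the two blocks separate
                continue
            C[IDX[a], IDX[b]] += 1
        row = C.sum(axis=1, keepdims=True)
        return np.divide(C, row, out=np.zeros_like(C), where=row > 0)

    # Toy aligned codons (hypothetical); real matrices need millions of pairs.
    pairs = [("ATG", "ATG"), ("AAA", "AAG"), ("AAA", "AAA"), ("TAA", "TGA")]
    P = codon_substitution_matrix(pairs)
    print(P[IDX["AAA"], IDX["AAG"]])   # 0.5, from the two AAA observations
    ```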

  5. Matrix Embedded Organic Synthesis

    Science.gov (United States)

    Kamakolanu, U. G.; Freund, F. T.

    2016-05-01

    In the matrix of minerals such as olivine, a redox reaction of the low-Z elements occurs. Oxygen is oxidized to the peroxy state while the low-Z elements become chemically reduced. We assign them a formula [CxHyOzNiSj]n- and call them proto-organics.

  6. A deconvolution procedure for determination of a fluorescence decay waveform applicable to a band-limited measurement system that has a time delay

    Science.gov (United States)

    Iwata, Tetsuo; Shibata, Hironobu; Araki, Tsutomu

    2008-01-01

    We propose a deconvolution procedure for deriving a true fluorescence decay waveform, applicable to a band-limited measurement system that has a time delay. In general, the fluorescence decay response has a time delay relative to the excitation light pulse, mainly due to the wavelength dependence of the time delay of the response of a photomultiplier tube (PMT). Furthermore, the frequency band is limited instrumentally, or often must be cut off purposely at a relatively low frequency to eliminate high-frequency noise. In order to extrapolate such a band-limited response function and to perform the deconvolution procedure in the frequency domain, we introduce a delay-time parameter into the conventional Gerchberg-Papoulis (GP) algorithm. We also demonstrate that the Hilbert-transform (HT)-based extrapolation method that we proposed quite recently is equivalent to the GP algorithm. Simulation and experimental results show that the new method is effective.
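
    The GP iteration alternates between enforcing the measured low-frequency band in the Fourier domain and the known time-domain constraint on the signal. A minimal sketch of the plain band-limited extrapolation loop, without the paper's delay-time parameter; the cutoff and support values are illustrative:

    ```python
    import numpy as np

    # Gerchberg-Papoulis band-limited extrapolation (plain form): alternately
    # impose the measured low-frequency band and the known time support.
    n, fs = 1024, 1.0
    t = np.arange(n) / fs
    x = np.exp(-t / 40.0) * (t < 300)          # "true" decay, support t < 300
    X = np.fft.fft(x)

    band = np.abs(np.fft.fftfreq(n, 1 / fs)) < 0.02   # measured band only
    X_meas = X * band                           # band-limited measurement

    x_est = np.fft.ifft(X_meas).real
    for _ in range(200):
        x_est[t >= 300] = 0.0                   # time-domain support constraint
        X_est = np.fft.fft(x_est)
        X_est[band] = X_meas[band]              # re-impose the measured band
        x_est = np.fft.ifft(X_est).real
    print(np.linalg.norm(x_est - x) / np.linalg.norm(x))  # shrinks per iteration
    ```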

  7. Deconvolution of a linear combination of Gaussian kernels by an inhomogeneous Fredholm integral equation of second kind and applications to image processing

    CERN Document Server

    Ulmer, Waldemar

    2011-01-01

    Scatter processes of photons lead to blurring of images. Multiple scatter can usually be described by one Gaussian convolution kernel. This can be a crude approximation, and we need a linear combination of two/three Gaussian kernels to account for the tails. If image structures are recorded by appropriate measurements, these structures are always blurred. The ideal image (the source function without any blurring) is subjected to Gaussian convolutions to yield a blurred image, which is recorded by a detector array. The inverse problem of this procedure is the determination of the ideal source image from the actually measured image. If the scatter parameters are known, we are able to calculate the ideal source structure by a deconvolution. We extend this to linear combinations of two/three Gaussian convolution kernels in order to find applications in the aforementioned image processing, where a single Gaussian kernel would be crude. In this communication, we derive a new deconvolution method for a linear combination of...

  8. Deconvolution and chromatic aberration corrections in quantifying colocalization of a transcription factor in three-dimensional cellular space.

    Science.gov (United States)

    Abraham, Thomas; Allan, Sarah E; Levings, Megan K

    2010-08-01

    In the realm of multi-dimensional confocal microscopy, colocalization analysis of fluorescent emission signals has proven to be an invaluable tool for detecting molecular interactions between biological macromolecules at the subcellular level. We show here that image processing operations such as deconvolution and chromatic corrections play a crucial role in the accurate determination of colocalization between biological macromolecules, particularly when the fluorescent signals are faint and lie in the blue and red emission regions. The cellular system presented here describes quantification of an activated forkhead box P3 (FOXP3) transcription factor in three-dimensional (3D) cellular space. 293T cells transfected with a conditionally active form of FOXP3 were stained for anti-FOXP3 conjugated to a fluorescent red dye (Phycoerythrin), and counterstained for DNA (nucleus) with a fluorescent blue dye (Hoechst). Due to the broad emission spectra of these dyes, the fluorescent signals were collected only from peak regions and were acquired sequentially. Since the PE signal was weak, a confocal pinhole of two Airy units was used to collect the 3D image data sets. The raw images supplemented with the spectral data show the preferential association of activated FOXP3 molecules with the nucleus. However, the PE signals were found to be highly diffusive, and colocalization quantification from these raw images was not possible. In order to deconvolve the 3D raw image data set, point spread functions (PSFs) of these emissions were measured. From the measured PSFs, we found that chromatic shifts between the blue and red colors were quite considerable. Following the application of both axial and lateral chromatic corrections, colocalization analysis performed on the deconvolved, chromatically corrected 3D image data set showed that 98% of DNA molecules were associated with FOXP3 molecules, whereas only 66% of FOXP3 molecules were colocalized with DNA.
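
    A minimal sketch of the analysis chain described: per-channel Richardson-Lucy deconvolution with a measured PSF, a sub-pixel chromatic-shift correction, then a Manders-style colocalization fraction. The Gaussian PSF, shift vector and threshold below are placeholders, and scikit-image/scipy are assumed; this is not the authors' pipeline:

    ```python
    import numpy as np
    from scipy.ndimage import shift
    from skimage.restoration import richardson_lucy

    def colocalization_fraction(red, blue, psf_r, psf_b, chroma_shift, thr=0.1):
        """Deconvolve each channel with its measured PSF, correct the red
        channel for the measured chromatic offset, then report the fraction
        of red signal falling on above-threshold blue voxels (Manders M1)."""
        red_d = richardson_lucy(red, psf_r, 30)
        blue_d = richardson_lucy(blue, psf_b, 30)
        red_d = shift(red_d, chroma_shift, order=1)   # axial/lateral correction
        mask = blue_d > thr * blue_d.max()
        return red_d[mask].sum() / red_d.sum()

    # Hypothetical 3D stacks and Gaussian PSFs, for illustration only.
    rng = np.random.default_rng(1)
    z, y, x = np.mgrid[-7:8, -7:8, -7:8]
    psf = np.exp(-(z**2 + y**2 + x**2) / 6.0)
    psf /= psf.sum()
    img = rng.random((32, 64, 64))
    print(colocalization_fraction(img, img, psf, psf, chroma_shift=(0.5, 1.0, 1.0)))
    ```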

  9. Reconstruction of high-resolution time series from slow-response broadband terrestrial irradiance measurements by deconvolution

    Directory of Open Access Journals (Sweden)

    A. Ehrlich

    2015-09-01

    Full Text Available Broadband solar and terrestrial irradiance measurements of high temporal resolution are needed to study inhomogeneous clouds or surfaces and to derive vertical profiles of heating/cooling rates at cloud top. An efficient method to enhance the temporal resolution of slow-response measurements of broadband terrestrial irradiance using pyrgeometers is introduced. It is based on the deconvolution theorem of the Fourier transform to restore the amplitude and phase shift of high-frequency fluctuations. It is shown that the quality of reconstruction depends on the instrument noise, the pyrgeometer response time and the frequency of the oscillations. The method is tested in laboratory measurements for synthetic time series including a boxcar function and periodic oscillations using a CGR-4 pyrgeometer with a response time of 3 s. The originally slow-response pyrgeometer data were reconstructed to higher resolution and compared to the predefined synthetic time series. The reconstruction of the time series worked up to oscillations of 0.5 Hz frequency and 2 W m−2 amplitude if the sampling frequency of the data acquisition is 16 kHz or higher. For oscillations faster than 2 Hz, the instrument noise exceeded the reduced amplitude of the oscillations in the measurements and the reconstruction failed. The method was applied to airborne measurements of upward terrestrial irradiance from the VERDI (Vertical Distribution of Ice in Arctic Clouds) field campaign. Pyrgeometer data above open leads in sea ice and a broken cloud field were reconstructed and compared to KT19 infrared thermometer data. The reconstruction of amplitude and phase shift of the deconvolved data improved the agreement with the KT19 data. Cloud top temperatures were improved by up to 1 K above broken clouds of 80–800 m size (1–10 s flight time), while an underestimation of 2.5 W m−2 was found for the upward irradiance over small leads of about 600 m diameter (10 s flight time) when using the slow-response data.

  10. Multiple Suppression from 2-D Shallow Marine Seismic Reflection Data Using Filtering and Deconvolution Approaches: A Case Study from Southwest Taiwan

    Science.gov (United States)

    Boyaci, Fatma Sinem

    A primary objective of the seismic data processing workflow is to improve the signal-to-noise ratio. A seismic record contains many types of noise besides the primary reflections that convey the vital information. A non-negligible part of this noise consists of multiple reflections, which cause difficulties and misinterpretations. This work examines filtering techniques and deconvolution in an effort to attenuate multiples on a 2D line of marine data from southwest Taiwan, and compares their results. Prior to evaluating methods for attenuating multiples, basic seismic processing was applied to the data: zeroing bad traces, applying a spherical divergence correction, and band-pass filtering. The data were then sorted into common-midpoint (CMP) gathers. These CMP gathers were analyzed, and stacking velocities were determined so that normal move-out (NMO) processing and stacking could be applied. Following this basic processing, two methods of multiple suppression were applied separately and evaluated: (1) filtering and (2) deconvolution. The filtering methods, which included stacking, frequency (f)-wavenumber (k) filtering and the Radon transform, were applied in an effort to separate multiples from primaries. Deconvolution was also utilized. Finally, the results of these approaches were discussed and compared. For this data set, the Radon transform attenuates the long-period multiples better than the other approaches, and applying deconvolution to the Radon-filtered data shows further improvement. A stacked and migrated section of the data was produced as the final image.

  11. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    Science.gov (United States)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization stem from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with a fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both a digital XCAT phantom and preclinical porcine data. The preliminary experimental results demonstrate that the presented MPD-AwTTV deconvolution algorithm achieves remarkable gains in noise-induced artifact suppression, edge-detail preservation, and accurate flow-scaled residue function and MPHM estimation compared with other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  12. Statistical modeling of deconvolution procedures for improving the resolution of measuring electron temperature profiles in tokamak plasmas by Thomson scattering lidar

    Science.gov (United States)

    Dreischuh, Tanja N.; Gurdev, Ljuan L.; Stoyanov, Dimitar V.

    2010-10-01

    The potential of deconvolution techniques for high-resolution restoration of electron temperature profiles in fusion plasma reactors such as the Joint European Torus (JET), measured by Thomson scattering lidar using the center-of-mass wavelength approach, is investigated by statistical modeling. The sensing laser pulse shape and the receiving-system response function are assumed to be exponentially shaped. The influence of the plasma light background is taken into account, as well as the Poisson fluctuations of the photoelectron number after the photocathode, enhanced in the process of cascade multiplication in the employed microchannel photomultiplier tube. It is shown that Fourier deconvolution of the measured long-pulse (lidar-response-convolved) lidar profiles, at relatively high and low signal-to-noise ratios, ensures higher accuracy in recovering the electron temperature profiles, with three times higher range resolution compared to the case without deconvolution. The final resolution scale is determined by the width of the window of an optimum monotone sharp-cutoff digital noise-suppressing (noise-controlling) filter applied to the measured lidar profiles.

  13. Improving accuracy in the quantitation of overlapping, asymmetric, chromatographic peaks by deconvolution: theory and application to coupled gas chromatography atomic absorption spectrometry

    Science.gov (United States)

    Johansson, M.; Berglund, M.; Baxter, D. C.

    1993-09-01

    Systematic errors in the measurement of overlapping, asymmetric chromatographic peaks are observed using the perpendicular-drop and tangent-skimming algorithms incorporated in commercial integrators. The magnitude of such errors increases with the degree of tailing and with differences in peak size, and was found to be as great as 80% for peak-area and 100% for peak-height measurements made on the smaller, second component of simulated, noise-free chromatograms containing peaks at a size ratio of 10 to 1. Initial deconvolution of overlapping peaks, by mathematical correction for asymmetry, leads to significant improvements in the accuracy of both peak-area and peak-height measurements using the simple perpendicular-drop algorithm. A comparison of analytical data for the separation and determination of three organolead species by coupled gas chromatography atomic absorption spectrometry using peak-height and peak-area measurements also demonstrates the improved accuracy obtained following deconvolution. It is concluded that the deconvolution method described could be beneficial in a variety of chromatographic applications where overlapping, asymmetric peaks are observed.
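
    One standard way to "correct for asymmetry" before applying the perpendicular-drop algorithm is to deconvolve exponential tailing: if the detector output y is the true profile x convolved with a normalized exponential of time constant tau (a first-order lag), then exactly x = y + tau*dy/dt. A small sketch with a synthetic tailed peak pair at a 10:1 size ratio; tau and the peak parameters are illustrative, not from the paper:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # If y = x * (1/tau) exp(-t/tau)  (exponential tailing, first-order lag),
    # then exactly x = y + tau * dy/dt: a one-line asymmetry deconvolution.
    dt, tau = 0.01, 0.8
    t = np.arange(0, 40, dt)
    x = (np.exp(-0.5 * ((t - 10) / 0.5) ** 2)        # major peak
         + 0.1 * np.exp(-0.5 * ((t - 12) / 0.5) ** 2))  # minor peak, 10:1 ratio

    kernel = np.exp(-t / tau) / tau
    y = fftconvolve(x, kernel)[: t.size] * dt        # tailed, overlapping peaks

    x_rec = y + tau * np.gradient(y, dt)             # deconvolved, symmetric again
    print(np.max(np.abs(x_rec - x)))                 # small (discretization error)
    ```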

  14. Independent component analysis (ICA) algorithms for improved spectral deconvolution of overlapped signals in 1H NMR analysis: application to foods and related products.

    Science.gov (United States)

    Monakhova, Yulia B; Tsikin, Alexey M; Kuballa, Thomas; Lachenmeier, Dirk W; Mushtakova, Svetlana P

    2014-05-01

    The major challenge facing NMR spectroscopic mixture analysis is the overlap of signals and the resulting difficulty of recovering the structures of the individual components for identification, and of integrating separated signals for quantification. In this paper, various independent component analysis (ICA) algorithms [mutual information least dependent component analysis (MILCA); stochastic non-negative ICA (SNICA); joint approximate diagonalization of eigenmatrices (JADE); and the robust, accurate, direct ICA algorithm (RADICAL)] as well as deconvolution methods [simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) and multivariate curve resolution-alternating least squares (MCR-ALS)] are applied for simultaneous (1)H NMR spectroscopic determination of organic substances in complex mixtures. Among others, we studied constituents of the following matrices: honey, soft drinks, and liquids used in electronic cigarettes. Good-quality spectral resolution of up to eight-component mixtures was achieved (correlation coefficients between resolved and experimental spectra were not less than 0.90). In general, the relative errors in the recovered concentrations were below 12%. The SIMPLISMA and MILCA algorithms were found to be preferable for NMR spectra deconvolution and showed similar performance. The proposed method was used for the analysis of authentic samples. The resolved ICA concentrations match well with the results of reference gas chromatography-mass spectrometry, as well as with the MCR-ALS algorithm used for comparison. ICA deconvolution considerably improves the application range of direct NMR spectroscopy for the analysis of complex mixtures. Copyright © 2014 John Wiley & Sons, Ltd.
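
    The blind-separation setup is easy to reproduce with any off-the-shelf ICA implementation. A sketch using scikit-learn's FastICA as a stand-in (the paper evaluates MILCA, SNICA, JADE and RADICAL, none of which ships with scikit-learn), on synthetic overlapped spectra:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Two synthetic component "spectra" with overlapping peaks.
    ppm = np.linspace(0, 10, 2000)
    peak = lambda c, w: np.exp(-0.5 * ((ppm - c) / w) ** 2)
    S = np.stack([peak(3.0, 0.1) + peak(7.2, 0.15),
                  peak(3.2, 0.1) + peak(5.5, 0.2)])      # sources (2 x points)

    # Mixtures at different concentration ratios (rows = measured spectra).
    A = np.array([[1.0, 0.5],
                  [0.4, 1.0],
                  [0.8, 0.9]])
    X = A @ S + 0.005 * np.random.default_rng(2).standard_normal((3, ppm.size))

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X.T).T      # recovered spectra (up to scale/order)
    corr = np.corrcoef(np.vstack([S, S_est]))[:2, 2:]
    print(np.round(np.abs(corr), 2))      # one value near 1 per row
    ```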

  15. Analysis of protein film voltammograms as Michaelis-Menten saturation curves yield the electron cooperativity number for deconvolution.

    Science.gov (United States)

    Heering, Hendrik A

    2012-10-01

    Deconvolution of protein film voltammetric data by fitting multiple components (sigmoids, derivative peaks) is often ambiguous when features partially overlap, due to the exchangeability between the width and the number of components. Here, a new method is presented to obtain the width of the components. This is based on the equivalence between the sigmoidal catalytic response as a function of electrode potential and the classical saturation curve obtained for enzyme activity as a function of the soluble substrate concentration, which is also sigmoidal when plotted versus log[S]. Thus, analysis of the catalytic voltammogram with Lineweaver-Burk, Eadie-Hofstee, and Hanes-Woolf plots is feasible. This provides a very sensitive measure of the cooperativity number (Hill coefficient), which for electrons equals the apparent (fractional) number of electrons that determines the width, and thereby the number of components (kinetic phases). This analysis is applied to the electrocatalytic oxygen reduction by Paracoccus denitrificans cytochrome aa(3) (cytochrome c oxidase). Four partially overlapping kinetic phases are observed that (stepwise) increase the catalytic efficiency with increasingly reductive potential. Translated to cell biology, the activity of the terminal oxidase adapts stepwise to the metabolic demand for oxidative phosphorylation. Copyright © 2011 Elsevier B.V. All rights reserved.
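
    The equivalence the paper exploits can be made concrete: substituting [S] -> exp(-n_app*F*(E - E_1/2)/RT) turns the Michaelis-Menten saturation curve into a sigmoid in potential whose width is set by the apparent electron number n_app (the Hill coefficient for electrons). A sketch that recovers a fractional n_app from a synthetic catalytic wave; the functional form and parameter values are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    F, R, T = 96485.0, 8.314, 298.0

    def catalytic_wave(E, i_lim, E_half, n_app):
        """Sigmoidal catalytic reduction wave; n_app (Hill coefficient for
        electrons) sets the width, playing the role of [S] in Michaelis-Menten."""
        return i_lim / (1.0 + np.exp(n_app * F * (E - E_half) / (R * T)))

    E = np.linspace(-0.4, 0.2, 200)                 # potential vs. reference (V)
    rng = np.random.default_rng(3)
    i = catalytic_wave(E, 1.0, -0.1, 0.8) + 0.01 * rng.standard_normal(E.size)

    popt, _ = curve_fit(catalytic_wave, E, i, p0=[1.0, -0.1, 1.0])
    print(f"n_app = {popt[2]:.2f}")                 # fractional n_app ~ 0.8
    ```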

  16. Imaging finite-fault earthquake source by iterative deconvolution and stacking (IDS) of near-field complete seismograms

    Science.gov (United States)

    Wang, Rongjiang; Zhang, Yong; Zschau, Jochen; Chen, Yun-tai; Parolai, Stefano; Diao, Faqi; Dahm, Torsten

    2015-04-01

    By combining the complementary advantages of conventional inversion and back-projection methods, we have developed an iterative deconvolution and stacking (IDS) approach for imaging earthquake rupture processes with near-field complete waveform data. This new approach does not need any manual adjustment of the physical (empirical) constraints, such as restricting the rupture time and duration, smoothing the spatiotemporal slip distribution, etc., and therefore has the ability to image complex multiple ruptures automatically. The advantages of the IDS method over traditional linear or non-linear optimization algorithms are demonstrated by case studies of the 2008 Wenchuan (China), 2011 Tohoku (Japan) and 2014 Pisagua-Iquique (Chile) earthquakes. For such large earthquakes, the IDS method is considerably more stable and efficient than previous inversion methods. Additionally, the robustness of this method is demonstrated by comprehensive synthetic tests, indicating its potential contribution to tsunami early warning and earthquake rapid response systems. It is also shown that the IDS method can be used for teleseismic waveform inversions. For the 2011 Tohoku earthquake, for example, the IDS method can provide, without tuning any physical or empirical constraints, teleseismic rupture models consistent with those derived from the near-field GPS and strong-motion data.

  17. SUPER-RESOLUTION AND DE-CONVOLUTION FOR SINGLE/MULTI GRAY SCALE IMAGES USING SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    Ritu Soni

    2015-10-01

    Full Text Available This paper presents a blind algorithm that restores blurred images, covering single-image and multi-image blur deconvolution as well as multi-image super-resolution for low-resolution images degraded by additive white Gaussian noise, aliasing and linear space-invariant blur. Image deblurring is a field of image processing concerned with recovering an original, sharp image from a corrupted observation. The proposed method is based on an alternating minimization algorithm with respect to the unknown blurs and the high-resolution (HR) image, and on the Huber-Markov random field (HMRF) model, chosen for its ability to preserve image discontinuities and used for regularization that exploits the piecewise-smooth nature of the HR image. The SIFT algorithm is used for feature extraction and produces matching features based on the Euclidean distance of their feature vectors, which assists in the calculation of the PSF. For blur estimation, an edge-emphasizing smoothing operation is used to improve the quality of the blur estimate by enhancing strong soft edges. The blur estimation process is carried out in the filter domain rather than the pixel domain, using the gradients of the HR and LR images, for better performance.

  18. Cell-type deconvolution with immune pathways identifies gene networks of host defense and immunopathology in leprosy

    Science.gov (United States)

    Inkeles, Megan S.; Teles, Rosane M.B.; Pouldar, Delila; Andrade, Priscila R.; Madigan, Cressida A.; Ambrose, Mike; Sarno, Euzenir N.; Rea, Thomas H.; Ochoa, Maria T.; Iruela-Arispe, M. Luisa; Swindell, William R.; Ottenhoff, Tom H.M.; Geluk, Annemieke; Bloom, Barry R.

    2016-01-01

    Transcriptome profiles derived from the site of human disease have led to the identification of genes that contribute to pathogenesis, yet the complex mixture of cell types in these lesions has been an obstacle for defining specific mechanisms. Leprosy provides an outstanding model to study host defense and pathogenesis in a human infectious disease, given its clinical spectrum, which interrelates with the host immunologic and pathologic responses. Here, we investigated gene expression profiles derived from skin lesions for each clinical subtype of leprosy, analyzing gene coexpression modules by cell-type deconvolution. In lesions from tuberculoid leprosy patients, those with the self-limited form of the disease, dendritic cells were linked with MMP12 as part of a tissue remodeling network that contributes to granuloma formation. In lesions from lepromatous leprosy patients, those with disseminated disease, macrophages were linked with a gene network that programs phagocytosis. In erythema nodosum leprosum, neutrophil and endothelial cell gene networks were identified as part of the vasculitis that results in tissue injury. The present integrated computational approach provides a systems approach toward identifying cell-defined functional networks that contribute to host defense and immunopathology at the site of human infectious disease. PMID:27699251
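
    Cell-type deconvolution of bulk expression profiles is commonly posed as constrained regression: each lesion profile is modeled as a non-negative mixture of reference cell-type signatures. A generic sketch with non-negative least squares; the matrices are hypothetical, and the paper's actual pipeline couples this idea with coexpression-module analysis:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Reference signatures: genes x cell types (hypothetical values).
    # Columns could represent, e.g., dendritic cells, macrophages, neutrophils.
    S = np.array([[9.0, 1.0, 0.5],
                  [0.5, 8.0, 1.0],
                  [1.0, 0.5, 7.0],
                  [2.0, 2.0, 2.0]])

    # Bulk lesion profile = mixture of cell types plus noise.
    f_true = np.array([0.2, 0.7, 0.1])
    bulk = S @ f_true + 0.05 * np.random.default_rng(4).standard_normal(4)

    f_est, _ = nnls(S, bulk)
    f_est /= f_est.sum()                    # normalize to fractions
    print(np.round(f_est, 2))               # close to [0.2, 0.7, 0.1]
    ```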

  19. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    Science.gov (United States)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach to determining the optimum estimation parameters and, further, to estimating the optimum seismic wavelet by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data, comparing deterministic and statistical wavelet estimation in detail, yield qualitative conclusions that are likely to be useful for seismic inversion and the interpretation of field data. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the seismic-to-well tie.
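
    Statistical (predictive) deconvolution rests on the Wiener-Levinson prediction filter: under the convolutional model with white reflectivity, the trace autocorrelation approximates the wavelet autocorrelation, and the prediction-error filter whitens the trace (unit prediction lag gives spiking deconvolution). A sketch using scipy's Toeplitz solver; the trace, filter length and prewhitening level are illustrative:

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def prediction_error_filter(trace, nfilt=20, lag=1, prewhite=0.01):
        """Wiener-Levinson predictive deconvolution: design a prediction
        filter from the trace autocorrelation (Toeplitz normal equations)
        and return the prediction-error filter; lag=1 -> spiking decon."""
        r = np.correlate(trace, trace, mode="full")[trace.size - 1:]
        r = r / r[0]
        col = r[:nfilt].copy()
        col[0] += prewhite                   # prewhitening stabilizes the solve
        a = solve_toeplitz(col, r[lag:lag + nfilt])
        pef = np.zeros(lag + nfilt)
        pef[0] = 1.0
        pef[lag:] = -a                       # prediction-error filter
        return pef

    rng = np.random.default_rng(5)
    wavelet = np.exp(-0.3 * np.arange(30)) * np.cos(0.6 * np.arange(30))
    refl = rng.standard_normal(500) * (rng.random(500) < 0.1)  # sparse reflectivity
    trace = np.convolve(refl, wavelet)[:500]

    decon = np.convolve(trace, prediction_error_filter(trace))[:500]
    print(np.corrcoef(decon[:-1], decon[1:])[0, 1])   # near 0: trace whitened
    ```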

  20. Atlasing the frontal lobe connections and their variability due to age and education: a spherical deconvolution tractography study.

    Science.gov (United States)

    Rojkova, K; Volle, E; Urbanski, M; Humbert, F; Dell'Acqua, F; Thiebaut de Schotten, M

    2016-04-01

    In neuroscience, there is a growing consensus that higher cognitive functions may be supported by distributed networks involving different cerebral regions, rather than by single brain areas. Communication within these networks is mediated by white matter tracts and is particularly prominent in the frontal lobes for the control and integration of information. However, the detailed mapping of frontal connections remains incomplete, albeit crucial to an increased understanding of these cognitive functions. Based on 47 high-resolution diffusion-weighted imaging datasets (age range 22-71 years), we built a statistical normative atlas of the frontal lobe connections in stereotaxic space, using state-of-the-art spherical deconvolution tractography. We dissected 55 tracts including U-shaped fibers. We further characterized these tracts by measuring their correlation with age and education level. We report age-related differences in the microstructural organization of several specific frontal fiber tracts, but found no correlation with education level. Future voxel-based analyses, such as voxel-based morphometry or tract-based spatial statistics studies, may benefit from our atlas by identifying the tracts and networks involved in frontal functions. Our atlas will also build the capacity of clinicians to further understand the mechanisms involved in brain recovery and plasticity, as well as assist them in the diagnosis of disconnection or abnormality within specific tracts in individual patients with various brain diseases.

  1. Deconvolution imaging of weak reflective pipe defects using guided-wave signals captured by a scanning receiver.

    Science.gov (United States)

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2017-02-01

    Guided-wave echoes from weak reflective pipe defects are usually contaminated by coherent noise and are difficult to interpret. In this paper, a deconvolution imaging method is proposed to reconstruct defect images from synthetically focused guided-wave signals with enhanced axial resolution. A compact transducer, circumferentially scanning around the pipe, is used to receive guided-wave echoes from discontinuities at a distance. This method achieves a higher circumferential sampling density than arrayed transducers: up to 72 sampling spots per lap for a pipe with a diameter of 180 mm. A noise suppression technique is used to enhance the signal-to-noise ratio. The enhancement in both signal-to-noise ratio and axial resolution is experimentally validated by the detection of two kinds of artificial defects: a pitting defect of 5 mm in diameter and 0.9 mm in maximum depth, and iron pieces attached to the pipe surface. A reconstructed image of the pitting defect is obtained with a 5.87 dB signal-to-noise ratio. By comparing images reconstructed with different down-sampling ratios, it is shown that a high circumferential sampling density is important for inspection sensitivity. A modified full width at half maximum is used as the criterion to evaluate the circumferential extent of the region where iron pieces are attached, which is applicable for defects with inhomogeneous reflection intensity.

  2. Automated cancer stem cell recognition in H and E stained tissue using convolutional neural networks and color deconvolution

    Science.gov (United States)

    Aichinger, Wolfgang; Krappe, Sebastian; Cetin, A. Enis; Cetin-Atalay, Rengul; Üner, Aysegül; Benz, Michaela; Wittenberg, Thomas; Stamminger, Marc; Münzenmayer, Christian

    2017-03-01

    The analysis and interpretation of histopathological samples and images is an important discipline in the diagnosis of various diseases, especially cancer. An important factor in prognosis and treatment, with the aim of precision medicine, is the determination of so-called cancer stem cells (CSC), which are known for their resistance to chemotherapeutic treatment and involvement in tumor recurrence. Using immunohistochemistry with CSC markers like CD13, CD133 and others is one way to identify CSC. In our work we aim at identifying CSC presence on ubiquitous Hematoxylin and Eosin (HE) staining, as an inexpensive tool for routine histopathology, based on their distinct morphological features. We present initial results of a new method based on color deconvolution (CD) and convolutional neural networks (CNN). This method performs favorably (accuracy 0.936) in comparison with a state-of-the-art method based on 1DSIFT and eigen-analysis feature sets evaluated on the same image database. We also show that the accuracy of the CNN is improved by the CD pre-processing.
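
    Color deconvolution separates stain contributions by unmixing in optical-density space, where stains combine linearly (Beer-Lambert law). A sketch of this pre-processing step using the widely used Ruifrok-Johnston stain vectors, as shipped with common implementations such as scikit-image's rgb2hed; the CNN stage of the paper is not reproduced:

    ```python
    import numpy as np

    # Ruifrok-Johnston stain vectors (rows: hematoxylin, eosin, residual/DAB),
    # as used in common implementations such as scikit-image's rgb2hed.
    STAINS = np.array([[0.65, 0.70, 0.29],
                       [0.07, 0.99, 0.11],
                       [0.27, 0.57, 0.78]])
    UNMIX = np.linalg.inv(STAINS / np.linalg.norm(STAINS, axis=1, keepdims=True))

    def color_deconvolution(rgb):
        """Unmix an H&E-stained RGB image (floats in (0, 1]) into per-stain
        optical-density channels via Beer-Lambert: OD = -log10(I / I0)."""
        od = -np.log10(np.clip(rgb, 1e-6, None))
        return od @ UNMIX          # (..., 3): H, E, residual channels

    rgb = np.random.default_rng(6).uniform(0.2, 1.0, size=(64, 64, 3))
    hed = color_deconvolution(rgb)
    print(hed.shape, hed[..., 0].mean())   # hematoxylin channel statistics
    ```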

  3. Constrained Spherical Deconvolution analysis of the limbic network in human, with emphasis on a direct cerebello-limbic pathway

    Directory of Open Access Journals (Sweden)

    Alessandro eArrigo

    2014-12-01

    Full Text Available The limbic system is part of an intricate network involved in several functions such as memory and emotion. Traditionally, the role of the cerebellum was considered to be mainly associated with motor control; however, evidence is accumulating for a role of the cerebellum in learning skills, emotion control, and mnemonic and behavioral processes, also involving connections with the limbic system. In fifteen normal subjects we studied limbic connections by probabilistic Constrained Spherical Deconvolution (CSD) tractography. The main result of our work was to prove for the first time in the human brain the existence of a direct cerebello-limbic pathway, which was previously hypothesized but never demonstrated. We also extended our analysis to the other limbic connections, including the cingulate fasciculus, inferior longitudinal fasciculus, uncinate fasciculus, anterior thalamic connections and fornix. Although these pathways have already been described in the tractography literature, we provide reconstruction, quantitative analysis and right-left FA symmetry comparison using probabilistic CSD tractography, which is known to provide a potential improvement compared to previously used Diffusion Tensor Imaging techniques. The demonstration of the existence of the cerebello-limbic pathway could constitute an important step in the knowledge of the anatomical substrate of non-motor cerebellar functions. Finally, the CSD statistical data about limbic connections in healthy subjects could be potentially useful in the diagnosis of pathological disorders damaging this system.

  4. A posteriori analysis of low-pass spatial filters for approximate deconvolution large eddy simulations of homogeneous incompressible flows

    CERN Document Server

    San, Omer; Iliescu, Traian

    2014-01-01

    The goal of this paper is twofold: first, it investigates the effect of low-pass spatial filters for approximate deconvolution large eddy simulation (AD-LES) of turbulent incompressible flows. Second, it proposes the hyper-differential filter as a means of increasing the accuracy of the AD-LES model without increasing the computational cost. Box filters, Padé filters, and differential filters with a wide range of parameters are studied in the AD-LES framework. The AD-LES model, in conjunction with these spatial filters, is tested in the numerical simulation of the three-dimensional Taylor-Green vortex problem. The numerical results are benchmarked against direct numerical simulation (DNS) data. An under-resolved numerical simulation is also used for comparison purposes. Four criteria are used to investigate the AD-LES model equipped with these spatial filters: (i) the time series of the volume-averaged enstrophy; (ii) the volume-averaged third-order structure function; (iii) the $L^2$-norm of the velocity...
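
    Approximate deconvolution replaces the exact inverse of the filter G with the truncated van Cittert series Q_N = sum_{k=0}^{N} (I - G)^k, so the deconvolved field is u* = Q_N u_bar. A 1-D Fourier-space sketch with a differential filter (1 - delta^2 d^2/dx^2) u_bar = u on a periodic domain; the parameters are illustrative, not those of the paper:

    ```python
    import numpy as np

    # Approximate deconvolution: Q_N = sum_{k=0..N} (I - G)^k approximates
    # G^{-1}. A differential filter is diagonal in Fourier space:
    # G(kappa) = 1 / (1 + delta^2 kappa^2).
    n, L, delta, N = 256, 2 * np.pi, 0.05, 5
    x = np.linspace(0, L, n, endpoint=False)
    kappa = np.fft.fftfreq(n, L / n) * 2 * np.pi

    u = np.sin(3 * x) + 0.3 * np.sin(15 * x)       # resolved "truth"
    G = 1.0 / (1.0 + (delta * kappa) ** 2)
    u_bar = np.fft.ifft(np.fft.fft(u) * G).real    # filtered field

    QN = sum((1.0 - G) ** k for k in range(N + 1)) # truncated van Cittert series
    u_star = np.fft.ifft(np.fft.fft(u_bar) * QN).real
    print(np.linalg.norm(u_star - u) / np.linalg.norm(u))  # << filtering error
    ```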

  5. Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors.

    Science.gov (United States)

    Bioucas-Dias, José M

    2006-04-01

    Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.

  6. Structural Analysis Of Ethylene/Acrylic And Methacrylic Acid Copolymers Using Fourier Self-Deconvolution Of Infrared Spectra

    Science.gov (United States)

    Harthcock, Matthew A.

    1985-12-01

    Fourier self-deconvolution (FSD) has been applied to several regions of the infrared spectra of ethylene/acrylic and methacrylic acid copolymers to obtain detailed information on the structure of these copolymers. The computer-assisted technique has been applied to the 1050-830 cm-1 region of the infrared spectrum of the copolymers to resolve the vinyl (909 cm-1) and vinylidene (887 cm-1) CH2 wagging vibrations from the in-phase out-of-plane hydrogen deformation vibration of the acid dimer (943 cm-1). The technique was applied to the carbonyl stretching vibration region (1820-1660 cm-1) to study the structure of the acid groups. Two distinct hydrogen-bonded (1710 and 1696 cm-1) and free (1758 and 1745 cm-1) acid group structures were observed for the 9% acrylic acid copolymers, while the methacrylic acid copolymer showed predominantly one hydrogen-bonded (1696 cm-1) and one free (1758 cm-1) acid group structure. Also, the 6.5% acrylic acid copolymers showed essentially one type of hydrogen-bonded (1706 cm-1) carbonyl and two free carbonyl stretching absorptions (1758 and 1745 cm-1).

  7. Convolution/deconvolution of generalized Gaussian kernels with applications to proton/photon physics and electron capture of charged particles

    CERN Document Server

    Ulmer, W

    2012-01-01

    Scatter processes of photons lead to blurring of images produced by CT (computed tomography) or CBCT (cone beam computed tomography) in the kV domain or portal imaging in the MV domain (kV: kilovoltage, MV: megavoltage). Multiple scatter is described by, at least, one Gaussian kernel. In various situations, this approximation is crude, and we need two/three Gaussian kernels to account for the long-range tails (Landau tails), which appear in the Molière scatter of protons, energy straggling and electron capture of charged particles passing through matter, and Compton scatter of photons. The ideal image (source function) is subjected to Gaussian convolutions to yield a blurred image recorded by a detector array. The inverse problem is to obtain the ideal source image from the measured image. Deconvolution methods for linear combinations of two/three Gaussian kernels with different parameters s0, s1, s2 can be derived via an inhomogeneous Fredholm integral equation of second kind (IFIE2) and Liouville-Neumann series...

  8. Matrix string theory

    Science.gov (United States)

    Dijkgraaf, Robbert; Verlinde, Erik; Verlinde, Herman

    1997-02-01

    Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N = 8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states.

  9. Matrix string theory

    Energy Technology Data Exchange (ETDEWEB)

    Dijkgraaf, R. [Amsterdam Univ. (Netherlands). Dept. of Mathematics; Verlinde, E. [TH-Division, CERN, CH-1211 Geneva 23 (Switzerland)]|[Institute for Theoretical Physics, Universtity of Utrecht, 3508 TA Utrecht (Netherlands); Verlinde, H. [Institute for Theoretical Physics, University of Amsterdam, 1018 XE Amsterdam (Netherlands)

    1997-09-01

    Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N=8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states. (orig.).

  10. Matrix String Theory

    CERN Document Server

    Dijkgraaf, R; Verlinde, Herman L

    1997-01-01

    Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N=8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states.

  11. Holomorphic matrix integrals

    CERN Document Server

    Felder, G; Felder, Giovanni; Riser, Roman

    2004-01-01

    We study a class of holomorphic matrix models. The integrals are taken over middle-dimensional cycles in the space of complex square matrices. As the size of the matrices tends to infinity, the distribution of eigenvalues is given by a measure with support on a collection of arcs in the complex plane. We show that the arcs are level sets of the imaginary part of a hyperelliptic integral connecting branch points.

  12. Matrix groups for undergraduates

    CERN Document Server

    Tapp, Kristopher

    2016-01-01

    Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...

  13. Metal matrix Composites

    Directory of Open Access Journals (Sweden)

    Pradeep K. Rohatgi

    1993-10-01

    Full Text Available This paper reviews the worldwide upsurge in metal matrix composite research and development activities, with particular emphasis on cast metal-matrix particulate composites. Extensive applications of cast aluminium alloy MMCs in day-to-day use in the transportation as well as durable goods industries are expected to advance rapidly in the next decade. The potential for extensive application of cast composites is very large in India, especially in the areas of transportation, energy and electromechanical machinery; the extensive use of composites can lead to large savings in materials and energy, and in several instances, reduce environmental pollution. It is important that engineering education and short-term courses be organized to bring MMCs to the attention of students and engineering industry leaders. India already has an excellent infrastructure for the development of composites, and has a long track record of world-class research in cast metal matrix particulate composites. It is now necessary to catalyze prototype and regular production of selected composite components, and get them used in different sectors, especially railways, cars, trucks, buses, scooters and other electromechanical machinery. This will require suitable policies, backed up by funding, to bring together the first-rate talent in cast composites which already exists in India, to form viable development groups, followed by the setting up of production plants involving the process engineering capability already available within the country. In the longer term, cast composites should be developed for use in energy generation equipment, electronic packaging, aerospace systems, and smart structures.

  14. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

    Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of SRM sensor response as well as understanding of uncertainties associated with the SRM measurement system. In this paper, we use the SRM at Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with the SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without placing a u-channel sample on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
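
    A minimal sketch of the inversion the abstract relies on, assuming the pass-through measurement is the discrete convolution m = S a of the magnetization with the sensor response; the Tikhonov smoothness penalty and all names are illustrative, not the authors' actual code.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, solve

    def deconvolve_srm(measured, response, dz, lam=1e-3):
        """measured: pass-through signal on a grid with spacing dz;
        response: sensor response sampled (centred) on the same grid.
        Solves (S^T S + lam * L^T L) a = S^T m, i.e. Tikhonov-regularised
        inversion of m = S a with a second-difference smoothness penalty."""
        n = len(measured)
        off = len(response) // 2
        val = lambda k: response[k] if 0 <= k < len(response) else 0.0
        col = np.array([val(i + off) for i in range(n)])
        row = np.array([val(off - j) for j in range(n)])
        S = toeplitz(col, row) * dz                  # convolution matrix
        L = np.diff(np.eye(n), 2, axis=0)            # second-difference operator
        return solve(S.T @ S + lam * (L.T @ L), S.T @ measured)
    ```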

  15. Matrix Theory of Small Oscillations

    Science.gov (United States)

    Chavda, L. K.

    1978-01-01

    A complete matrix formulation of the theory of small oscillations is presented. Simple analytic solutions involving matrix functions are found which clearly exhibit the transients, the damping factors, the Breit-Wigner form for resonances, etc. (BB)

  16. Matrix Completions and Chordal Graphs

    Institute of Scientific and Technical Information of China (English)

    Kenneth John HARRISON

    2003-01-01

    In a matrix-completion problem the aim is to specify the missing entries of a matrix in order to produce a matrix with particular properties. In this paper we survey results concerning matrix-completion problems where we look for completions of various types for partial matrices supported on a given pattern. We see that the existence of completions of the required type often depends on the chordal properties of graphs associated with the pattern.

  17. THE GENERALIZED POLARIZATION SCATTERING MATRIX

    Science.gov (United States)

    The Least Square Best Estimate of the Generalized Polarization matrix from a set of measurements is then developed. It is shown that the Faraday...matrix data. It is then shown that the Least Square Best Estimate of the orientation angle of a symmetric target is also determinable from Faraday rotation contaminated short pulse monostatic polarization matrix data.

  18. The cellulose resource matrix.

    Science.gov (United States)

    Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G

    2013-03-01

    The emerging biobased economy is causing shifts from mineral fossil oil based resources towards renewable resources. Because of market mechanisms, current and new industries utilising renewable commodities will attempt to secure their supply of resources. Cellulose is among these commodities, where large scale competition can be expected and already is observed for the traditional industries such as the paper industry. Cellulose and lignocellulosic raw materials (like wood and non-wood fibre crops) are being utilised in many industrial sectors. Due to the initiated transition towards the biobased economy, these raw materials are also being intensively investigated for new applications such as 2nd generation biofuels and 'green' chemicals and materials production (Clark, 2007; Lange, 2007; Petrus & Noordermeer, 2006; Ragauskas et al., 2006; Regalbuto, 2009). As lignocellulosic raw materials are available in variable quantities and qualities, unnecessary competition can be avoided via the choice of suitable raw materials for a target application. For example, utilisation of cellulose as a carbohydrate source for ethanol production (Kabir Kazi et al., 2010) avoids the discussed competition with more easily digestible carbohydrates (sugars, starch) derived from the food supply chain. Also for cellulose use as a biopolymer several different competing markets can be distinguished. It is clear that these applications and markets will be influenced by large volume shifts. The world will have to reckon with the increase of competition and feedstock shortage (land use/biodiversity) (van Dam, de Klerk-Engels, Struik, & Rabbinge, 2005). It is of interest - in the context of sustainable development of the bioeconomy - to categorize the already available and emerging lignocellulosic resources in a matrix structure. When composing such a "cellulose resource matrix" attention should be given to the quality aspects as well as to the available quantities and practical possibilities of processing the

  19. Matrix string partition function

    CERN Document Server

    Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre

    1998-01-01

    We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.

  20. Matrix vector analysis

    CERN Document Server

    Eisenman, Richard L

    2005-01-01

    This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics.Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structur

  1. Random matrix theory

    CERN Document Server

    Deift, Percy

    2009-01-01

    This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles-orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derived.

  2. Supported Molecular Matrix Electrophoresis.

    Science.gov (United States)

    Matsuno, Yu-Ki; Kameyama, Akihiko

    2015-01-01

    Mucins are difficult to separate using conventional gel electrophoresis methods such as SDS-PAGE and agarose gel electrophoresis, owing to their large size and heterogeneity. On the other hand, cellulose acetate membrane electrophoresis can separate these molecules, but is not compatible with glycan analysis. Here, we describe a novel membrane electrophoresis technique, termed "supported molecular matrix electrophoresis" (SMME), in which a porous polyvinylidene difluoride (PVDF) membrane filter is used to achieve separation. This description includes the separation, visualization, and glycan analysis of mucins with the SMME technique.

  3. Matrix algebra for linear models

    CERN Document Server

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  4. Application of seismic interferometry by multidimensional deconvolution to ambient seismic noise recorded in Malargüe, Argentina

    Science.gov (United States)

    Weemstra, Cornelis; Draganov, Deyan; Ruigrok, Elmer N.; Hunziker, Jürg; Gomez, Martin; Wapenaar, Kees

    2017-02-01

    Obtaining new seismic responses from existing recordings is generally referred to as seismic interferometry (SI). Conventionally, the SI responses are retrieved by simple crosscorrelation of recordings made by separate receivers: one of the receivers acts as a `virtual source' whose response is retrieved at the other receivers. When SI is applied to recordings of ambient seismic noise, mostly surface waves are retrieved. The newly retrieved surface wave responses can be used to extract receiver-receiver phase velocities. These phase velocities often serve as input parameters for tomographic inverse problems. Another application of SI exploits the temporal stability of the multiply scattered arrivals of the newly retrieved surface wave responses. Temporal variations in the stability and/or arrival time of these multiply scattered arrivals can often be linked to temporally varying parameters such as hydrocarbon production and precipitation. For all applications, however, the accuracy of the retrieved responses is paramount. Correct response retrieval relies on a uniform illumination of the receivers: irregularities in the illumination pattern degrade the accuracy of the newly retrieved responses. In practice, the illumination pattern is often far from uniform. In that case, simple crosscorrelation of separate receiver recordings only yields an estimate of the actual, correct virtual-source response. Reformulating the theory underlying SI by crosscorrelation as a multidimensional deconvolution (MDD) process, allows this estimate to be improved. SI by MDD corrects for the non-uniform illumination pattern by means of a so-called point-spread function (PSF), which captures the irregularities in the illumination pattern. Deconvolution by this PSF removes the imprint of the irregularities on the responses obtained through simple crosscorrelation. We apply SI by MDD to surface wave data recorded by the Malargüe seismic array in western Argentina. The aperture of the array
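
    The MDD step described above can be sketched as a per-frequency matrix deconvolution: the crosscorrelation matrix is divided by the point-spread-function matrix in a damped least-squares sense. This is a schematic under assumed array shapes; the regularisation choice and all names are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def mdd(C, Gamma, eps=1e-2):
        """C, Gamma: (nfreq, nrec, nrec) complex arrays holding, per frequency,
        the crosscorrelation and point-spread-function (PSF) matrices.
        Solves G Gamma = C for G by damped least squares, removing the imprint
        of a non-uniform illumination pattern from the correlation responses."""
        nrec = C.shape[-1]
        G = np.empty_like(C)
        for i, (Cw, Pw) in enumerate(zip(C, Gamma)):
            PPh = Pw @ Pw.conj().T
            damp = eps * np.trace(PPh).real / nrec * np.eye(nrec)  # stabiliser
            G[i] = Cw @ Pw.conj().T @ np.linalg.inv(PPh + damp)
        return G
    ```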

  5. Extracellular Matrix Proteins

    Directory of Open Access Journals (Sweden)

    Linda Christian Carrijo-Carvalho

    2012-01-01

    Full Text Available Lipocalin family members have been implicated in development, regeneration, and pathological processes, but their roles are unclear. Interestingly, these proteins are found abundant in the venom of the Lonomia obliqua caterpillar. Lipocalins are β-barrel proteins, which have three conserved motifs in their amino acid sequence. One of these motifs was shown to be a sequence signature involved in cell modulation. The aim of this study is to investigate the effects of a synthetic peptide comprising the lipocalin sequence motif in fibroblasts. This peptide suppressed caspase 3 activity and upregulated Bcl-2 and Ki-67, but did not interfere with GPCR calcium mobilization. Fibroblast responses also involved increased expression of proinflammatory mediators. Increase of extracellular matrix proteins, such as collagen, fibronectin, and tenascin, was observed. Increase in collagen content was also observed in vivo. Results indicate that modulation effects displayed by lipocalins through this sequence motif involve cell survival, extracellular matrix remodeling, and cytokine signaling. Such effects can be related to the lipocalin roles in disease, development, and tissue repair.

  6. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Azcona, J [Department of Radiation Physics, Clinica Universidad de Navarra (Spain); Burguete, J [Universidad de Navarra, Pamplona, Navarra (Spain)

    2014-06-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at the depths of 5, 10, 15, and 20cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head, after further collimation, originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2mm, starting at a radius of 4mm. There the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3mm) in the modulated fields for all cases is at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
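
    A schematic of the kernel extraction under the stated convolution model (dose = fluence convolved with kernel): for circularly symmetric fields, regularised division in the 2-D Fourier domain is numerically equivalent to the Hankel-transform route used in the paper. The regularisation and names are illustrative assumptions.

    ```python
    import numpy as np

    def kernel_by_deconvolution(dose, fluence, eps=1e-3):
        """dose, fluence: 2-D arrays on the same grid at one depth. Returns
        a pencil-beam kernel k with dose ~= fluence (*) k, obtained by
        regularised spectral division."""
        D = np.fft.rfft2(dose)
        F = np.fft.rfft2(fluence)
        K = D * np.conj(F) / (np.abs(F) ** 2 + eps * np.abs(F).max() ** 2)
        return np.fft.fftshift(np.fft.irfft2(K, s=dose.shape))
    ```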

  7. Faster with CLEAN: an exploration of the effects of applying a nonlinear deconvolution method to a novel radiation mapper

    Science.gov (United States)

    Southgate, Matthew J.; Taylor, Christopher T.; Hutchinson, Simon; Bowring, Nicholas J.

    2014-10-01

    This paper examines the suitability and potential of reducing the acquisition requirements of a novel radiation mapper through the application of the non-linear deconvolution technique, CLEAN. The radiation mapper generates a threshold image of the target scene, at a user defined distance, using a single pixel detector manually scanned across the scene. This paper provides a discussion of the factors involved and merits of incorporating CLEAN into the system. In this paper we describe the modifications to the system for the generation of an intensity map and the relationship between resolution and acquisition time for a target scene. The factors influencing image fidelity for a scene are identified and discussed along with the impact on the fill-factor of the intensity image, which in turn determines the ability of the operator to accurately identify features of the radiation source within a target scene. The CLEAN algorithm and its variants have been extensively developed by the radio astronomy community to improve the image fidelity of data collected by sparse interferometric arrays. However, the algorithm has demonstrated surprising adaptability including terrestrial imagery, as detailed in Taylor et al. SPIE 9078-19 and Bose et al., IEEE 2002. CLEAN can be applied directly to raw data via a bespoke algorithm. However, this investigation is a proof-of-concept and thus requires a well tested verification method. We have opted to use the publicly available implementation of CLEAN found in the Common Astronomy Software Applications (CASA) package. The use of CASA for this purpose dictates the use of simulated input data and radio astronomy standard parameters. Finally, this paper presents the results of applying CLEAN to our simulated target scene, with a discussion of the potential merits a bespoke implementation would yield.
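
    For reference, the core of the Högbom CLEAN loop the paper builds on is only a few lines: find the brightest residual pixel, subtract a scaled copy of the beam centred there, and record the component. This sketch is a minimal stand-in for the CASA implementation actually used; all names are illustrative.

    ```python
    import numpy as np

    def hogbom_clean(dirty, beam, gain=0.1, threshold=0.01, max_iter=500):
        """dirty: 2-D intensity map; beam: 2-D point spread function.
        Returns (model of clean components, final residual map)."""
        residual = dirty.astype(float).copy()
        model = np.zeros_like(residual)
        by, bx = beam.shape[0] // 2, beam.shape[1] // 2
        for _ in range(max_iter):
            y, x = np.unravel_index(np.argmax(residual), residual.shape)
            peak = residual[y, x]
            if peak < threshold:
                break
            model[y, x] += gain * peak                 # record clean component
            # subtract the shifted, scaled beam over the overlapping window
            y0, y1 = max(0, y - by), min(residual.shape[0], y - by + beam.shape[0])
            x0, x1 = max(0, x - bx), min(residual.shape[1], x - bx + beam.shape[1])
            residual[y0:y1, x0:x1] -= gain * peak * beam[
                y0 - (y - by):y1 - (y - by), x0 - (x - bx):x1 - (x - bx)]
        return model, residual
    ```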

  8. Rolling bearing fault diagnosis based on time-delayed feedback monostable stochastic resonance and adaptive minimum entropy deconvolution

    Science.gov (United States)

    Li, Jimeng; Li, Ming; Zhang, Jinfeng

    2017-08-01

    Rolling bearings are the key components in modern machinery, and tough operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the useful feature information relevant to the bearing fault contained in the vibration signals is weak, which makes it difficult to identify the fault symptom of rolling bearings in time. Therefore, the paper proposes a novel weak-signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, which can deconvolve the effect of the transmission path and clarify the defect-induced impulses. A modified power spectrum kurtosis (MPSK) index is constructed to realize the adaptive selection of filter length in the MED algorithm. By introducing a time-delayed feedback term into an over-damped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting appropriate time delay, feedback intensity and re-scaling ratio with a genetic algorithm, the SR can be produced to realize the resonance detection of weak signals. The combination of the adaptive MED (AMED) method and TFMSR method is conducive to extracting the feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, some experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
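
    A compact sketch of the Wiggins-style MED iteration underlying the preprocessing step, with plain output kurtosis standing in for the paper's MPSK index when selecting the filter length; the iteration count, lengths and names are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, solve
    from scipy.signal import lfilter

    def med(x, L=30, n_iter=30):
        """Minimum entropy deconvolution: find an FIR filter f (length L)
        whose output y = f * x has maximal kurtosis, sharpening impulses."""
        r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) - 1 + L]
        R = toeplitz(r) + 1e-6 * r[0] * np.eye(L)   # stabilised autocorrelation
        f = np.zeros(L)
        f[L // 2] = 1.0                             # start from a delayed spike
        for _ in range(n_iter):
            y = lfilter(f, 1.0, x)
            # cross-moment b_k = sum_n y[n]^3 x[n-k]
            b = np.array([np.dot(y[k:] ** 3, x[:len(x) - k]) for k in range(L)])
            f = solve(R, b) * (np.sum(y ** 2) / np.sum(y ** 4))
            f /= np.linalg.norm(f)                  # remove scale ambiguity
        return f, lfilter(f, 1.0, x)

    def kurtosis(y):
        y = y - y.mean()
        return np.mean(y ** 4) / np.mean(y ** 2) ** 2

    # adaptive filter-length selection, with output kurtosis standing in for
    # the paper's MPSK index:
    # best_L = max(range(10, 101, 10), key=lambda L: kurtosis(med(x, L)[1]))
    ```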

  9. Measurements Of Coronary Mean Transit Time And Myocardial Tissue Blood Flow By Deconvolution Of Intravasal Tracer Dilution Curves

    Science.gov (United States)

    Korb, H.; Hoeft, A.; Hellige, G.

    1984-10-01

    Previous studies have shown that intramyocardial blood volume does not vary to a major extent even during extreme variation of hemodynamics and coronary vascular tone. Based on a constant intramyocardial blood volume it is therefore possible to calculate tissue blood flow from the mean transit time of an intravascular tracer. The purpose of this study was to develop a clinically applicable method for measurement of coronary blood flow. The new method was based on indocyanine green, a dye which is bound to albumin and intravasally detectable by means of a fiberoptic catheter device. One fiberoptic catheter was placed in the aortic root and another in the coronary sinus. After central venous dye injection the resulting arterial and coronary venous dye dilution curves were processed on-line by a micro-computer. The mean transit time as well as myocardial blood flow were calculated from the step response function of the deconvoluted arterial and coronary venous signals. Reference flow was determined with an extracorporeal electromagnetic flowprobe within a coronary sinus bypass system. 38 steady states with coronary blood flow ranging from 49 - 333 ml/min*100g were analysed in 5 dogs. Mean transit times varied from 2.9 to 16.6 sec. An average intracoronary blood volume of 13.9 ± 1.8 ml/100g was calculated. The correlation between flow determined by the dye dilution technique and flow measured with the reference method was 0.98. According to these results determination of coronary blood flow with a double fiberoptic system and indocyanine green should be possible even under clinical conditions. Furthermore, the arterial and coronary venous oxygen saturation can be monitored continuously by the fiberoptic catheters. Therefore, additional information about the performance of the heart such as myocardial oxygen consumption and myocardial efficiency is available with the same equipment.
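
    The flow computation can be sketched as follows, assuming a constant intravascular volume and using Tikhonov-regularised Fourier deconvolution in place of the authors' (unspecified) deconvolution scheme; variable names and the default volume are illustrative.

    ```python
    import numpy as np

    def transit_time_flow(c_art, c_ven, dt, volume_ml_per_100g=13.9, lam=1e-2):
        """c_art, c_ven: arterial and coronary venous dye dilution curves
        sampled every dt seconds. Returns (mean transit time [s],
        flow [ml/(min*100g)]) from flow = volume / MTT."""
        A = np.fft.rfft(c_art)
        V = np.fft.rfft(c_ven)
        H = V * np.conj(A) / (np.abs(A) ** 2 + lam * np.abs(A).max() ** 2)
        h = np.fft.irfft(H, n=len(c_art))       # impulse (transit-time) response
        h = np.clip(h, 0.0, None)
        h /= h.sum()                            # normalise to a density
        t = np.arange(len(h)) * dt
        mtt = np.sum(t * h)                     # mean transit time [s]
        flow = volume_ml_per_100g / mtt * 60.0  # ml/(min*100g)
        return mtt, flow
    ```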

  10. A New Method for Evaluating Actual Drug Release Kinetics of Nanoparticles inside Dialysis Devices via Numerical Deconvolution.

    Science.gov (United States)

    Zhou, Yousheng; He, Chunsheng; Chen, Kuan; Ni, Jieren; Cai, Yu; Guo, Xiaodi; Wu, Xiao Yu

    2016-12-10

    Nanoparticle formulations have found increasing applications in modern therapies. To achieve desired treatment efficacy and safety profiles, drug release kinetics of nanoparticles must be controlled tightly. However, actual drug release kinetics of nanoparticles cannot be readily measured due to technical difficulties, although various methods have been attempted. Among existing experimental approaches, the dialysis method is the most widely applied one due to its simplicity and avoidance of separating released drug from the nanoparticles. Yet this method only measures the released drug in the medium outside a dialysis device (the receiver), instead of actual drug release from the nanoparticles inside the dialysis device (the donor). Thus we proposed a new method using numerical deconvolution to evaluate actual drug release kinetics of nanoparticles inside the donor based on experimental release profiles of nanoparticles and free drug solution in the receiver determined by existing dialysis tests. Two computer programs were developed based on two different numerical methods, namely least-squares criteria with a prescribed Weibull function or orthogonal polynomials as the input function. The former was used for all analyses in this work while the latter was used to verify the reliability of the predictions. Experimental data of drug release from various nanoparticle formulations obtained from different dialysis settings and membrane pore sizes were used to substantiate this approach. The results demonstrated that this method is applicable to a broad range of nanoparticle and microparticle formulations requiring no additional experiments. It is independent of particle formulations, drug release mechanisms, and testing conditions. This new method may also be used, in combination with existing dialysis devices, to develop a standardized method for quality control, in vitro-in vivo correlation, and for development of nanoparticles and other types of dispersion formulations.
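
    A sketch of the Weibull-input variant of the described deconvolution: the donor release rate is parametrised by a Weibull function, convolved with the impulse response estimated from the free-drug-solution run, and fitted to the receiver profile by least squares. Function and variable names are assumptions, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def weibull_cdf(t, a, b):
        """Cumulative Weibull release fraction with scale a and shape b."""
        return 1.0 - np.exp(-(t / a) ** b)

    def fit_donor_release(t, receiver, g, dt):
        """t: sampling grid; receiver: measured cumulative release outside
        the dialysis device; g: receiver response to a unit bolus of free
        drug (from the free-solution experiment). Returns the estimated
        cumulative release inside the donor."""
        def residual(p):
            a, b = np.exp(p)                               # keep parameters > 0
            rate = np.gradient(weibull_cdf(t, a, b), dt)   # donor release rate
            predicted = np.convolve(rate, g)[:len(t)] * dt # receiver prediction
            return predicted - receiver
        p = least_squares(residual, x0=np.log([t[-1] / 3.0, 1.0])).x
        a, b = np.exp(p)
        return weibull_cdf(t, a, b)
    ```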

  11. Toward a rapid 3D spectral deconvolution of EMI conductivities measured with portable multi-configuration sensors

    Science.gov (United States)

    Guillemoteau, Julien; Tronicke, Jens

    2017-04-01

    Portable loop-loop electromagnetic induction (EMI) sensors using multiple coil configurations are of growing interest in hydrological, archaeological and agricultural studies for mapping the subsurface electrical conductivity. In contrast with EMI methods employing larger scale geometries (e.g., magnetotellurics, marine EM, airborne EM, transient EM, large offset loop-loop harmonic source EM), the portable EMI multi-configuration sensors operate in the low induction number (LIN) domain as they employ a rather low frequency harmonic source (< 20 kHz) and rather small coil separations (≤ 2 m). In the LIN domain, electrical conductivity has a minor effect on the forward modelling kernel. Accordingly, we have developed an algorithm to model this kind of data, which is based on a homogeneous half-space kernel. By formulating the problem in the hybrid spectral-spatial domain (kx, ky, z), we show that it is possible to generate large data maps containing more than 100,000 stations within a minute on a standard modern laptop computer. We compared this forward modelling approach to a robust approach based on the integral equation (IE) method. Our results show that, as long as the LIN approximation is fulfilled (i.e., for the system of interest, if the electrical conductivity is smaller than 0.5 S/m), the linear theory allows us to accurately and robustly handle the structural characteristics of the subsurface conductivity distribution. We therefore expect that our forward modelling procedure can be implemented in rapid multi-channel deconvolution procedures to extract the structural properties of the subsurface conductivity distribution from data sets acquired across rather large (hectare scale) areas.
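
    In the LIN regime the forward model is linear in conductivity, so each depth layer contributes a 2-D convolution with a configuration-specific sensitivity kernel; the sketch below shows that structure with FFTs. Kernel construction is instrument-specific and only assumed here as a precomputed input.

    ```python
    import numpy as np

    def forward_lin(sigma_layers, kernels):
        """sigma_layers: (nz, ny, nx) layered conductivity model.
        kernels: (nz, ny, nx) sensitivity kernels of one coil configuration,
        sampled on the same grid with the kernel peak at the grid centre.
        Returns the (ny, nx) map of modelled data."""
        data_hat = np.zeros(sigma_layers.shape[1:], dtype=complex)
        for sigma, k in zip(sigma_layers, kernels):
            # convolution theorem: accumulate each layer's contribution
            data_hat += np.fft.fft2(sigma) * np.fft.fft2(np.fft.ifftshift(k))
        return np.fft.ifft2(data_hat).real
    ```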

  12. Investigation of physico-chemical processes in lithium-ion batteries by deconvolution of electrochemical impedance spectra

    Science.gov (United States)

    Manikandan, Balasundaram; Ramar, Vishwanathan; Yap, Christopher; Balaya, Palani

    2017-09-01

    The individual physico-chemical processes in lithium-ion batteries, namely solid-state diffusion and charge transfer polarization, are difficult to track with impedance spectroscopy due to simultaneous contributions from cathode and anode. A deeper understanding of various polarization processes in lithium-ion batteries is important to enhance storage performance and cycle life. In this context, the polarization processes occurring in cylindrical 18650 cells comprising different cathodes against graphite anode (LiNi0.2Mn0.2Co0.6O2 vs. graphite; LiNi0.6Mn0.2Co0.2O2 vs. graphite; LiNi0.8Co0.15Al0.05O2 vs. graphite and LiFePO4 vs. graphite) are investigated by deconvolution of impedance spectra across various states of charge. Further, cathodes and anodes are extracted from the investigated 18650-type cells and tested in half-cells against Li-metal as well as in symmetric cell configurations to understand the contribution of cathode and anode to the full cells of the various battery chemistries studied. Except for the LiFePO4 vs. graphite cell, the polarization resistance of graphite in the other cells is found to be higher than that of the investigated cathodes, proving that the polarization in lithium-ion batteries is largely influenced by the graphitic anode. Furthermore, the charge transfer polarization resistance encountered by the cathodes investigated in this work is found to be a strong function of the states of charge.
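
    One practical route to separating such polarization contributions is an equivalent-circuit fit: a series resistance plus two depressed-semicircle (ZARC) elements, one per charge-transfer process. This is an illustration of the idea, not necessarily the deconvolution procedure used in the paper; the element count, fixed exponent and starting values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def zarc(w, R, tau, n=0.9):
        """Depressed semicircle: resistance in parallel with a CPE."""
        return R / (1.0 + (1j * w * tau) ** n)

    def fit_two_arcs(w, Z):
        """w: angular frequencies; Z: complex impedance. Fits
        Z ~ R0 + ZARC1 + ZARC2 by complex least squares; the log
        parametrisation keeps all values positive."""
        def residual(p):
            R0, R1, t1, R2, t2 = np.exp(p)
            model = R0 + zarc(w, R1, t1) + zarc(w, R2, t2)
            return np.concatenate([(model - Z).real, (model - Z).imag])
        p0 = np.log([1e-2, 1e-2, 1e-3, 1e-2, 1.0])
        fit = np.exp(least_squares(residual, p0).x)
        return dict(zip(["R0", "R1", "tau1", "R2", "tau2"], fit))
    ```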

  13. Transporter-Enzyme Interplay: Deconvoluting Effects of Hepatic Transporters and Enzymes on Drug Disposition Using Static and Dynamic Mechanistic Models.

    Science.gov (United States)

    Varma, Manthena V; El-Kattan, Ayman F

    2016-07-01

    A large body of evidence suggests hepatic uptake transporters, organic anion-transporting polypeptides (OATPs), are of high clinical relevance in determining the pharmacokinetics of substrate drugs, based on which recent regulatory guidances to industry recommend appropriate assessment of investigational drugs for the potential drug interactions. We recently proposed an extended clearance classification system (ECCS) framework in which the systemic clearance of class 1B and 3B drugs is likely determined by hepatic uptake. The ECCS framework therefore predicts the possibility of drug-drug interactions (DDIs) involving OATPs and the effects of genetic variants of SLCO1B1 early in the discovery and facilitates decision making in the candidate selection and progression. Although OATP-mediated uptake is often the rate-determining process in the hepatic clearance of substrate drugs, metabolic and/or biliary components also contribute to the overall hepatic disposition and, more importantly, to liver exposure. Clinical evidence suggests that alteration in biliary efflux transport or metabolic enzymes associated with genetic polymorphism leads to change in the pharmacodynamic response of statins, for which the pharmacological target resides in the liver. Perpetrator drugs may show inhibitory and/or induction effects on transporters and enzymes simultaneously. It is therefore important to adopt models that frame these multiple processes in a mechanistic sense for quantitative DDI predictions and to deconvolute the effects of individual processes on the plasma and hepatic exposure. In vitro data-informed mechanistic static and physiologically based pharmacokinetic models are proven useful in rationalizing and predicting transporter-mediated DDIs and the complex DDIs involving transporter-enzyme interplay.

  14. Matrix Quantization of Turbulence

    CERN Document Server

    Floratos, Emmanuel

    2011-01-01

    Based on our recent work on Quantum Nambu Mechanics [af2], we provide an explicit quantization of the Lorenz chaotic attractor through the introduction of Non-commutative phase space coordinates as Hermitian $N \times N$ matrices in $R^3$. For the volume preserving part, they satisfy the commutation relations induced by one of the two Nambu Hamiltonians, the second one generating a unique time evolution. Dissipation is incorporated quantum mechanically in a self-consistent way having the correct classical limit without the introduction of external degrees of freedom. Due to its volume phase space contraction it violates the quantum commutation relations. We demonstrate that the Heisenberg-Nambu evolution equations for the Matrix Lorenz system develop fast decoherence to N independent Lorenz attractors. On the other hand there is a weak dissipation regime, where the quantum mechanical properties of the volume preserving non-dissipative sector survive for long times.

  15. Matrix Graph Grammars

    CERN Document Server

    Velasco, Pedro Pablo Perez

    2008-01-01

    This book objective is to develop an algebraization of graph grammars. Equivalently, we study graph dynamics. From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars for which a purely algebraic approach does not exist up to now. A Chomsky (or string) grammar is, roughly speaking, a precise description of a formal language (which in essence is a set of strings). On a more discrete mathematical style, it can be said that graph grammars -- Matrix Graph Grammars in particular -- study dynamics of graphs. Ideally, this algebraization would enforce our understanding of grammars in general, providing new analysis techniques and generalizations of concepts, problems and results known so far.

  16. Matrix anticirculant calculus

    Science.gov (United States)

    Dimiev, Stancho; Stoev, Peter; Stoilova, Stanislava

    2013-12-01

    The notion of an anticirculant is of interest to specialists in general algebra (see for instance [1]). In this paper we develop some aspects of anticirculants in real function theory. Denoting $X := x_0 + jx_1 + \cdots + j^m x_m$, $x_k \in \mathbb{R}$, $m+1 = 2n$, where $j^k$ is the $k$-th power of the matrix $j = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -1 & 0 & 0 & \cdots & 0 \end{pmatrix}$, we study the functional anticirculants $f(X) := f_0(x_0, x_1, \ldots, x_m) + j f_1(x_0, x_1, \ldots, x_m) + \cdots + j^{m-1} f_{m-1}(x_0, x_1, \ldots, x_m) + j^m f_m(x_0, x_1, \ldots, x_m)$, where $f_k(x_0, x_1, \ldots, x_m)$ are smooth functions of $2n$ real variables. A continuation to complex function theory will appear.

  17. Light cone matrix product

    Energy Technology Data Exchange (ETDEWEB)

    Hastings, Matthew B [Los Alamos National Laboratory

    2009-01-01

    We show how to combine the light-cone and matrix product algorithms to simulate quantum systems far from equilibrium for long times. For the case of the XXZ spin chain at Δ = 0.5, we simulate to a time of ≈ 22.5. While part of the long simulation time is due to the use of the light-cone method, we also describe a modification of the infinite time-evolving bond decimation algorithm with improved numerical stability, and we describe how to incorporate symmetry into this algorithm. While statistical sampling error means that we are not yet able to make a definite statement, the behavior of the simulation at long times indicates the appearance of either 'revivals' in the order parameter as predicted by Hastings and Levitov (e-print arXiv:0806.4283) or of a distinct shoulder in the decay of the order parameter.

  18. Matrix membranes and integrability

    Energy Technology Data Exchange (ETDEWEB)

    Zachos, C. [Argonne National Lab., IL (United States); Fairlie, D. [University of Durham (United Kingdom). Dept. of Mathematical Sciences; Curtright, T. [University of Miami, Coral Gables, FL (United States). Dept. of Physics

    1997-06-01

    This is a pedagogical digest of results reported in Curtright, Fairlie, & Zachos 1997, and an explicit implementation of Euler's construction for the solution of the Poisson Bracket dual Nahm equation. But it does not cover 9 and 10-dimensional systems, and subsequent progress on them (Fairlie 1997). Cubic interactions are considered in 3 and 7 space dimensions, respectively, for bosonic membranes in Poisson Bracket form. Their symmetries and vacuum configurations are explored. Their associated first order equations are transformed to Nahm's equations, and are hence seen to be integrable, for the 3-dimensional case, by virtue of the explicit Lax pair provided. Most constructions introduced also apply to matrix commutator or Moyal Bracket analogs.

  19. Spherical membranes in Matrix theory

    CERN Document Server

    Kabat, D; Kabat, Daniel; Taylor, Washington

    1998-01-01

    We consider membranes of spherical topology in uncompactified Matrix theory. In general for large membranes Matrix theory reproduces the classical membrane dynamics up to 1/N corrections; for certain simple membrane configurations, the equations of motion agree exactly at finite N. We derive a general formula for the one-loop Matrix potential between two finite-sized objects at large separations. Applied to a graviton interacting with a round spherical membrane, we show that the Matrix potential agrees with the naive supergravity potential for large N, but differs at subleading orders in N. The result is quite general: we prove a pair of theorems showing that for large N, after removing the effects of gravitational radiation, the one-loop potential between classical Matrix configurations agrees with the long-distance potential expected from supergravity. As a spherical membrane shrinks, it eventually becomes a black hole. This provides a natural framework to study Schwarzschild black holes in Matrix theory.

  20. Comment on ‘A novel method for fast and robust estimation of fluorescence decay dynamics using constrained least-square deconvolution with Laguerre expansion’

    Science.gov (United States)

    Zhang, Yongliang; Day-Uei Li, David

    2017-02-01

    This comment clarifies that Poisson noise, rather than Gaussian noise, should be included to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. Moreover, we also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostic but also for wider bioimaging applications, it is desirable to have precise noise models and equations.

  1. Linearized supergravity from Matrix theory

    CERN Document Server

    Kabat, D; Kabat, Daniel; Taylor, Washington

    1998-01-01

    We show that the linearized supergravity potential between two objects arising from the exchange of quanta with zero longitudinal momentum is reproduced to all orders in 1/r by terms in the one-loop Matrix theory potential. The essential ingredient in the proof is the identification of the Matrix theory quantities corresponding to moments of the stress tensor and membrane current. We also point out that finite-N Matrix theory violates the Equivalence Principle.

  2. Lectures on Matrix Field Theory

    Science.gov (United States)

    Ydri, Badis

    The subject of matrix field theory involves matrix models, noncommutative geometry, fuzzy physics and noncommutative field theory and their interplay. In these lectures, a lot of emphasis is placed on the matrix formulation of noncommutative and fuzzy spaces, and on the non-perturbative treatment of the corresponding field theories. In particular, the phase structure of noncommutative $\\phi^4$ theory is treated in great detail, and an introduction to noncommutative gauge theory is given.

  3. Matrix elements of unstable states

    CERN Document Server

    Bernard, V; Meißner, U -G; Rusetsky, A

    2012-01-01

    Using the language of non-relativistic effective Lagrangians, we formulate a systematic framework for the calculation of resonance matrix elements in lattice QCD. The generalization of the Lüscher-Lellouch formula for these matrix elements is derived. We further discuss in detail the procedure of the analytic continuation of the resonance matrix elements into the complex energy plane and investigate the infinite-volume limit.

  4. Enhancing an R-matrix

    CERN Document Server

    MacKaay, M A

    1996-01-01

    In order to construct a representation of the tangle category one needs an enhanced R-matrix. In this paper we define a sufficient and necessary condition for enhancement that can be checked easily for any R-matrix. If the R-matrix can be enhanced, we also show how to construct the additional data that define the enhancement. As a direct consequence we find a sufficient condition for the construction of a knot invariant.

  5. Matrix Models and Gravitational Corrections

    CERN Document Server

    Dijkgraaf, R; Temurhan, M; Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine

    2002-01-01

    We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.

  6. A matrix model for WZW

    Energy Technology Data Exchange (ETDEWEB)

    Dorey, Nick [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom); Tong, David [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom); Department of Theoretical Physics, TIFR,Homi Bhabha Road, Mumbai 400 005 (India); Stanford Institute for Theoretical Physics,Via Pueblo, Stanford, CA 94305 (United States); Turner, Carl [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom)

    2016-08-01

    We study a U(N) gauged matrix quantum mechanics which, in the large N limit, is closely related to the chiral WZW conformal field theory. This manifests itself in two ways. First, we construct the left-moving Kac-Moody algebra from matrix degrees of freedom. Secondly, we compute the partition function of the matrix model in terms of Schur and Kostka polynomials and show that, in the large N limit, it coincides with the partition function of the WZW model. This same matrix model was recently shown to describe non-Abelian quantum Hall states and the relationship to the WZW model can be understood in this framework.

  7. A Matrix Model for WZW

    CERN Document Server

    Dorey, Nick; Turner, Carl

    2016-01-01

    We study a U(N) gauged matrix quantum mechanics which, in the large N limit, is closely related to the chiral WZW conformal field theory. This manifests itself in two ways. First, we construct the left-moving Kac-Moody algebra from matrix degrees of freedom. Secondly, we compute the partition function of the matrix model in terms of Schur and Kostka polynomials and show that, in the large $N$ limit, it coincides with the partition function of the WZW model. This same matrix model was recently shown to describe non-Abelian quantum Hall states and the relationship to the WZW model can be understood in this framework.

  8. Extended Matrix Variate Hypergeometric Functions and Matrix Variate Distributions

    Directory of Open Access Journals (Sweden)

    Daya K. Nagar

    2015-01-01

    Full Text Available Hypergeometric functions of matrix arguments occur frequently in multivariate statistical analysis. In this paper, we define and study extended forms of Gauss and confluent hypergeometric functions of matrix arguments and show that they occur naturally in statistical distribution theory.

  9. Matrix Product Operators, Matrix Product States, and ab initio Density Matrix Renormalization Group algorithms

    CERN Document Server

    Chan, Garnet Kin-Lic; Nakatani, Naoki; Li, Zhendong; White, Steven R

    2016-01-01

    Current descriptions of the ab initio DMRG algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab-initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational par...

  10. Rare earth elements (REEs) in the tropical South Atlantic and quantitative deconvolution of their non-conservative behavior

    Science.gov (United States)

    Zheng, Xin-Yuan; Plancherel, Yves; Saito, Mak A.; Scott, Peter M.; Henderson, Gideon M.

    2016-03-01

    mid-Atlantic ridge acts as a regional net sink for light REEs, but has little influence on the net budget of heavy REEs. The combination of dense REE measurements with water mass deconvolution is shown to provide quantitative assessment of the relative roles of physical and biogeochemical processes in the oceanic cycling of REEs.

  11. Blind Image Deconvolution for Defocus Blurred Image

    Institute of Scientific and Technical Information of China (English)

    孙韶杰; 吴琼; 李国辉

    2011-01-01

    An algorithm of blind image deconvolution is proposed for defocus blurred images. Firstly, the Hough transform is applied to detect the line edges in the defocus image. Then the step-edges or approximate step-edges are located, based on the spatial statistical characteristics of the image and the modified Grubbs method. The line spread function is calculated using the detected step-edges, and the radius of the defocus blur is obtained from the relationship between the radius and the line spread function. Finally the defocus image is restored by the Wiener filter method. Tested on real defocus blurred photographs, the experimental results show that the proposed algorithm can detect the step-edges or approximate step-edges accurately, and improve the identification precision of the blur radius and the quality of the restored images. The algorithm has been applied successfully in practical detection forensics work.
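
    Once the blur radius has been estimated from the line spread function, the restoration step is a standard Wiener filter with a uniform-disk defocus PSF. A minimal sketch; the radius argument is assumed to come from the edge analysis described above, and the flat noise-to-signal ratio is an illustrative simplification.

    ```python
    import numpy as np

    def disk_psf(radius, shape):
        """Uniform-disk PSF modelling defocus blur of the given radius."""
        y, x = np.indices(shape)
        y = y - shape[0] // 2
        x = x - shape[1] // 2
        psf = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
        return psf / psf.sum()

    def wiener_restore(blurred, radius, nsr=1e-2):
        """blurred: grayscale image; radius: estimated defocus radius."""
        psf = disk_psf(radius, blurred.shape)
        H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centred at the origin
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter, flat NSR
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
    ```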

  12. Ceramic matrix composite article and process of fabricating a ceramic matrix composite article

    Science.gov (United States)

    Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert

    2016-01-12

    A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.

  13. Matrix Theory of pp Waves

    CERN Document Server

    Michelson, J

    2004-01-01

    The Matrix Theory that has been proposed for various pp wave backgrounds is discussed. Particular emphasis is on the existence of novel nontrivial supersymmetric solutions of the Matrix Theory. These correspond to branes of various shapes (ellipsoidal, paraboloidal, and possibly hyperboloidal) that are unexpected from previous studies of branes in pp wave geometries.

  14. How to Study a Matrix

    Science.gov (United States)

    Jairam, Dharmananda; Kiewra, Kenneth A.; Kauffman, Douglas F.; Zhao, Ruomeng

    2012-01-01

    This study investigated how best to study a matrix. Fifty-three participants studied a matrix topically (1 column at a time), categorically (1 row at a time), or in a unified way (all at once). Results revealed that categorical and unified study produced higher: (a) performance on relationship and fact tests, (b) study material satisfaction, and…

  15. An improved predictive deconvolution based on maximization of non-Gaussianity

    Institute of Scientific and Technical Information of China (English)

    刘军; 陆文凯

    2008-01-01

    The predictive deconvolution algorithm (PD), which is based on second-order statistics, assumes that the primaries and the multiples are implicitly orthogonal. However,the seismic data usually do not satisfy this assumption in practice. Since the seismic data (primaries and multiples) have a non-Gaussian distribution, in this paper we present an improved predictive deconvolution algorithm (IPD) by maximizing the non-Gaussianity of the recovered primaries. Applications of the IPD method on synthetic and real seismic datasets show that the proposed method obtains promising results.
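
    For context, the baseline second-order predictive deconvolution that the IPD modifies can be written as a prediction-error filter obtained from the Toeplitz normal equations; the non-Gaussianity-maximising update of the IPD itself is not reproduced here. Parameter names and the prewhitening factor are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, solve
    from scipy.signal import lfilter

    def predictive_deconvolution(trace, L=20, alpha=8, eps=1e-3):
        """trace: seismic trace (len(trace) >= alpha + L); L: filter length;
        alpha: prediction distance. Returns the prediction-error output,
        i.e. the estimated primaries under the orthogonality assumption."""
        r = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
        R = toeplitz(r[:L]) + eps * r[0] * np.eye(L)   # prewhitened Toeplitz
        g = r[alpha:alpha + L]                         # lagged autocorrelation
        p = solve(R, g)                                # prediction filter
        # prediction-error filter: y[n] = x[n] - sum_k p[k] x[n-alpha-k]
        f = np.concatenate(([1.0], np.zeros(alpha - 1), -p))
        return lfilter(f, 1.0, trace)
    ```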

  16. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    Science.gov (United States)

    Pan, Jianqiang

    1992-01-01

    This work addresses several important problems in the fields of signal processing and model identification: system structure identification, frequency response determination, high-order model reduction, high-resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier-based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Also, several new schemes for model reduction were developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high-resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm gives better performance than the usual methods. Also studied is the problem of deconvolution and parameter identification of a general noncausal nonminimum-phase ARMA system driven by non-Gaussian stationary random processes. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.

  17. Improvement of depth resolution of ADF-SCEM by deconvolution: effects of electron energy loss and chromatic aberration on depth resolution.

    Science.gov (United States)

    Zhang, Xiaobin; Takeguchi, Masaki; Hashimoto, Ayako; Mitsuishi, Kazutaka; Tezuka, Meguru; Shimojo, Masayuki

    2012-06-01

    Scanning confocal electron microscopy (SCEM) is a new imaging technique that is capable of depth sectioning with nanometer-scale depth resolution. However, the depth resolution in the optical axis direction (Z) is worse than might be expected on the basis of the vertical electron probe size calculated in the presence of spherical aberration. To investigate the origin of the degradation, the effects of electron energy loss and chromatic aberration on the depth resolution of annular dark-field SCEM were studied through both experiments and computational simulations. The simulation results obtained by taking these two factors into consideration coincided well with those obtained by experiments, which proved that electron energy loss and chromatic aberration cause blurs at the overfocus sides of the Z-direction intensity profiles rather than significantly degrading the depth resolution. In addition, a deconvolution method using a simulated point spread function, which combined two Gaussian functions, was adopted to process the XZ-slice images obtained both from experiments and simulations. As a result, the blurs induced by energy loss and chromatic aberration were successfully removed, and deconvolution of the experimental XZ-slice image also improved the depth resolution by about 30%.
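
    A sketch of the profile deconvolution with the double-Gaussian PSF model; Richardson-Lucy iterations are used here as one plausible choice, since the abstract does not name the deconvolution scheme, and all names are illustrative.

    ```python
    import numpy as np

    def double_gaussian(z, a1, s1, a2, s2):
        """PSF model: sum of two Gaussians along the Z direction."""
        return (a1 * np.exp(-z ** 2 / (2 * s1 ** 2))
                + a2 * np.exp(-z ** 2 / (2 * s2 ** 2)))

    def richardson_lucy_1d(profile, psf, n_iter=30):
        """profile: non-negative Z-direction intensity profile;
        psf: sampled (centred) point spread function."""
        psf = psf / psf.sum()
        psf_rev = psf[::-1]
        estimate = np.full(profile.shape, float(profile.mean()))
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode='same')
            ratio = profile / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, psf_rev, mode='same')
        return estimate
    ```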

  18. Deconvolution of overlapping spectral polymer signals in size exclusion separation-diode array detection separations by implementing a multivariate curve resolution method optimized by alternating least square.

    Science.gov (United States)

    Van Hoeylandt, Tim; Chen, Kai; Du Prez, Filip; Lynen, Frédéric

    2014-05-16

    Peaks eluting from a size exclusion separation (SEC) are often not completely baseline-separated due to the inherent dispersity of the polymer. Lowering the flow rate is sometimes a solution to obtain a better physical separation, but results in a longer retention time, which is often not desirable. The chemometrical deconvolution method discussed in this work provides the possibility of calculating the contribution of each peak separately in the total chromatogram of overlapping peaks. An in-house-developed MATLAB script differentiates between compounds based on their difference in UV-spectrum and retention time, using the entire 3D retention time UV-spectrum. Consequently, the output of the script offers the calculated chromatograms of the separate compounds as well as their respective UV-spectrum, of which the latter can be used for peak identification. This approach is of interest to quantitate contributions of different polymer types with overlapping UV-spectra and retention times, as is often the case in, for example, copolymer or polymer blend analysis. The applicability has been proven on mixtures of different polymer types: polystyrene, poly(methyl methacrylate) and poly(ethoxyethyl acrylate). This paper demonstrates that both qualitative and quantitative analyses are possible after deconvolution and that alternating concentrations of adjacent peaks do not significantly influence the obtained accuracy.
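
    The core of an MCR-ALS deconvolution is a short alternating-least-squares loop; the sketch below uses non-negativity as the only constraint and is simplified relative to the in-house MATLAB script described above (which also exploits retention-time information). All names are illustrative.

    ```python
    import numpy as np

    def mcr_als(D, n_components, n_iter=100, seed=0):
        """D: (times x wavelengths) data matrix from SEC-DAD. Returns C
        (per-component chromatograms) and S (per-component UV spectra)
        such that D ~= C @ S.T under non-negativity."""
        rng = np.random.default_rng(seed)
        S = rng.random((D.shape[1], n_components))       # initial spectra
        for _ in range(n_iter):
            C = D @ S @ np.linalg.pinv(S.T @ S)          # update chromatograms
            C = np.clip(C, 0.0, None)                    # non-negativity
            S = D.T @ C @ np.linalg.pinv(C.T @ C)        # update spectra
            S = np.clip(S, 0.0, None)
            S /= np.maximum(np.linalg.norm(S, axis=0), 1e-12)  # fix the scale
        return C, S
    ```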

  19. Machining of Metal Matrix Composites

    CERN Document Server

    2012-01-01

    Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...

  20. Matrix Model Approach to Cosmology

    CERN Document Server

    Chaney, A; Stern, A

    2015-01-01

    We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...

  1. Matrix convolution operators on groups

    CERN Document Server

    Chu, Cho-Ho

    2008-01-01

    In the last decade, convolution operators of matrix functions have received unusual attention due to their diverse applications. This monograph presents some new developments in the spectral theory of these operators. The setting is the Lp spaces of matrix-valued functions on locally compact groups. The focus is on the spectra and eigenspaces of convolution operators on these spaces, defined by matrix-valued measures. Among various spectral results, the L2-spectrum of such an operator is completely determined and as an application, the spectrum of a discrete Laplacian on a homogeneous graph is computed using this result. The contractivity properties of matrix convolution semigroups are studied and applications to harmonic functions on Lie groups and Riemannian symmetric spaces are discussed. An interesting feature is the presence of Jordan algebraic structures in matrix-harmonic functions.

  2. Bayesian Interpolation and Deconvolution

    Science.gov (United States)

    1992-07-01


  3. Unsupervised Blind Deconvolution

    Science.gov (United States)

    2013-09-01

    ...method of Marsaglia and Tsang [23]. Ten thousand samples of the same size as a given data set were generated. The p-values were averaged for all... "in adaptive-optics images," PASP 120, 1132-1143 (2008). [23] G. Marsaglia and W. W. Tsang, "A simple method for generating gamma variables," ACM Transactions on Mathematical Software 26(3), 363-372 (2000).
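
    The fragment above points at a Monte Carlo goodness-of-fit procedure built on gamma variates. A sketch of one plausible reading, for illustration only: the sample size, gamma shape, and choice of test below are assumptions, not taken from the report (NumPy's Generator.gamma itself uses the Marsaglia-Tsang method for shape >= 1).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_data, shape_k = 500, 2.5          # hypothetical data-set size and gamma shape
        pvals = [
            stats.kstest(rng.gamma(shape_k, size=n_data), "gamma", args=(shape_k,)).pvalue
            for _ in range(10_000)          # ten thousand samples, as in the fragment
        ]
        print(np.mean(pvals))               # averaged p-values; ~0.5 when the model fits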

  4. Deconvolution of ultrasound images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1992-01-01

    Based on physical models, it is indicated that the received pressure field in ultrasound B-mode images can be described by a convolution between a tissue reflection signal and the emitted pressure field. This result is used in a description of current image formation and in formulating a new...... processing scheme. The suggested estimator can take into account the dispersive attenuation, the temporal and spatial variation of the pulse, and the change in reflection strength and signal-to-noise ratio. Details of the algorithm and the estimation of parameters to be used are given. The performance...
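
    A minimal sketch of the convolution model described above: a sparse tissue reflection signal convolved with an emitted pulse, then deconvolved. The pulse shape, noise level, and the use of a plain Wiener filter are illustrative assumptions; the paper's estimator additionally handles dispersive attenuation and spatial pulse variation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        x = rng.standard_normal(n) * (rng.random(n) < 0.05)     # sparse reflectors
        t = np.arange(32)
        h = np.sin(2 * np.pi * t / 8) * np.exp(-0.2 * t)        # toy emitted pulse
        y = np.convolve(x, h)[:n] + 0.01 * rng.standard_normal(n)

        H = np.fft.rfft(h, n)
        Y = np.fft.rfft(y, n)
        snr = 100.0                                             # assumed signal-to-noise ratio
        x_hat = np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr), n)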

  5. Manufacturing Titanium Metal Matrix Composites by Consolidating Matrix Coated Fibres

    Institute of Scientific and Technical Information of China (English)

    Hua-Xin PENG

    2005-01-01

    Titanium metal matrix composites (TiMMCs) reinforced by continuous silicon carbide fibres are being developed for aerospace applications. TiMMCs manufactured by the consolidation of matrix-coated fibre (MCF) method offer optimum properties because of the resulting uniform fibre distribution, minimum fibre damage and fibre volume fraction control. In this paper, the consolidation of Ti-6Al-4V matrix-coated SiC fibres during vacuum hot pressing (VHP) has been investigated. Experiments were carried out on multi-ply MCFs under vacuum hot pressing. In contrast to most existing studies, the fibre arrangement has been carefully controlled, in either square or hexagonal arrays, throughout the consolidated sample. This has enabled the dynamic consolidation behaviour of MCFs to be demonstrated by eliminating fibre re-arrangement during the VHP process. The microstructural evolution of the matrix coating is reported and the deformation mechanisms involved are discussed.

  6. New pole placement algorithm - Polynomial matrix approach

    Science.gov (United States)

    Shafai, B.; Keel, L. H.

    1990-01-01

    A simple and direct pole-placement algorithm is introduced for dynamical systems having a block companion matrix A. The algorithm utilizes well-established properties of matrix polynomials. Pole placement is achieved by appropriately assigning coefficient matrices of the corresponding matrix polynomial. This involves only matrix additions and multiplications without requiring matrix inversion. A numerical example is given for the purpose of illustration.
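
    A scalar-coefficient sketch of the idea for a system in companion form: the closed-loop characteristic polynomial is assigned by shifting polynomial coefficients, using only additions (no matrix inversion). The paper works with coefficient matrices of a matrix polynomial; this toy shows only the scalar special case.

        import numpy as np

        a = np.array([6.0, 11.0, 6.0])                 # open loop: s^3 + 6s^2 + 11s + 6
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [-a[0], -a[1], -a[2]]])          # companion matrix
        B = np.array([[0.0], [0.0], [1.0]])

        poles = np.array([-1.0, -2.0, -5.0])           # desired closed-loop poles
        d = np.poly(poles)[::-1][:-1]                  # desired coefficients [d0, d1, d2]
        K = (d - a).reshape(1, -1)                     # feedback gain = coefficient shift
        A_cl = A - B @ K
        print(np.sort(np.linalg.eigvals(A_cl)))        # -> [-5., -2., -1.]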

  7. High temperature polymer matrix composites

    Science.gov (United States)

    Serafini, Tito T. (Editor)

    1987-01-01

    These are the proceedings of the High Temperature Polymer Matrix Composites Conference held at the NASA Lewis Research Center on March 16 to 18, 1983. The purpose of the conference was to provide scientists and engineers working in the field of high temperature polymer matrix composites an opportunity to review, exchange, and assess the latest developments in this rapidly expanding area of materials technology. Technical papers are presented in the following areas: (1) matrix development; (2) adhesive development; (3) characterization; (4) environmental effects; and (5) applications.

  8. Matrix Elements for Hylleraas CI

    Science.gov (United States)

    Harris, Frank E.

    The limitation to at most a single interelectron distance in individual configurations of a Hylleraas-type multiconfiguration wave function significantly restricts the types of integrals occurring in matrix elements for energy calculations, but even then, if the formulation is not handled efficiently, the angular parts of these integrals escalate into expressions of great complexity. This presentation reviews ways in which the angular-momentum calculus can be employed to systematize and simplify the matrix-element formulas, particularly those for the kinetic-energy matrix elements.

  9. Explaining the variability of Photochemical Reflectance Index (PRI): deconvolution of variability related to Light Use Efficiency and Canopy attributes.

    Science.gov (United States)

    Merlier, Elodie; Hmimina, Gabriel; Dufrêne, Eric; Soudani, Kamel

    2014-05-01

    The Photochemical Reflectance Index (PRI) was designed as a proxy of the state of the xanthophyll cycle, which plants use in response to excess light (Gamon et al., 1990; 1992). Strong relationships between PRI and LUE have been shown at leaf and canopy scales and over a wide range of species (Garbulsky et al., 2011). However, its use at canopy scale is significantly hampered by confounding factors such as the sensitivity of PRI to leaf pigment content (Gamon et al. 2001; Nakaji et al. 2006) and to canopy structure (Hilker et al. 2008). Several approaches have aimed at correcting such effects, and recent works have focused on the deconvolution of LUE-related and LUE-unrelated PRI variability (Rahimzadeh-Bajgiran et al. 2012). In this study, PRI variability at canopy scale is investigated over two years on three species (Fagus sylvatica, Quercus robur and Pinus sylvestris) growing under two water regimes. At the daily scale, PRI variability is mainly explained by radiation conditions. As already reported at leaf scale in Hmimina et al. (2014), analysis of PRI responses to incoming photosynthetically active radiation at the seasonal scale made it possible to separate two sources of variability: a constitutive variability mainly related to canopy structure and leaf chlorophyll content, and a facultative variability mainly related to LUE and soil moisture content. These results highlight the composite nature of the PRI signal measured at canopy scale and the importance of disentangling its sources of variability in order to accurately assess ecosystem light use efficiency. Gamon JA, Field CB, Bilger W, Björkman O, Fredeen AL, Peñuelas J. 1990. Remote sensing of the xanthophyll cycle and chlorophyll fluorescence in sunflower leaves and canopies. Oecologia 85, 1-7. Gamon JA, Field CB, Fredeen AL, Thayer S. 2001. Assessing photosynthetic downregulation in sunflower stands with an optically-based model. Photosynthesis Research 67, 113-125. Gamon JA, Peñuelas J, Field CB

  10. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
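
    A bare-bones sketch of the matrix product language the paper translates into: the (unnormalized) expectation value <psi|O|psi> evaluated by a single left-to-right sweep, contracting one MPS/MPO site at a time. Tensors here are random placeholders and the bond dimensions are arbitrary; this is only the contraction pattern, not the ab initio DMRG sweep itself.

        import numpy as np

        rng = np.random.default_rng(0)
        L, d, D, B = 6, 2, 4, 3                        # sites, physical dim, MPS/MPO bond dims
        A = [rng.standard_normal((1 if k == 0 else D, d, 1 if k == L - 1 else D))
             for k in range(L)]                        # MPS tensors, shape (Dl, d, Dr)
        W = [rng.standard_normal((1 if k == 0 else B, d, d, 1 if k == L - 1 else B))
             for k in range(L)]                        # MPO tensors, shape (Bl, d_out, d_in, Br)

        E = np.ones((1, 1, 1))                         # left boundary environment
        for k in range(L):
            # absorb one site: E[a,b,c] A[a,j,d] W[b,i,j,e] conj(A)[c,i,f] -> E[d,e,f]
            E = np.einsum('abc,ajd,bije,cif->def', E, A[k], W[k], A[k].conj())
        print(E[0, 0, 0])                              # <psi|O|psi>, unnormalized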

  12. GoM Diet Matrix

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Gulf of Maine diet matrix was developed for the EMAX exercise described in that center...

  13. Linear connections on matrix geometries

    CERN Document Server

    Madore, J; Mourad, J; Madore, John; Masson, Thierry; Mourad, Jihad

    1994-01-01

    A general definition of a linear connection in noncommutative geometry has been recently proposed. Two examples are given of linear connections in noncommutative geometries which are based on matrix algebras. They both possess a unique metric connection.

  14. Matrix Quantum Mechanics from Qubits

    CERN Document Server

    Hartnoll, Sean A; Mazenc, Edward A

    2016-01-01

    We introduce a transverse field Ising model with order N^2 spins interacting via a nonlocal quartic interaction. The model has an O(N,Z) (hyperoctahedral) symmetry. We show that the large N partition function admits a saddle point in which the symmetry is enhanced to O(N). We further demonstrate that this 'matrix saddle' correctly computes large N observables at weak and strong coupling. The matrix saddle undergoes a continuous quantum phase transition at intermediate couplings. At the transition the matrix eigenvalue distribution becomes disconnected. The critical excitations are described by large N matrix quantum mechanics. At the critical point, the low energy excitations are waves propagating in an emergent 1+1 dimensional spacetime.

  15. The R-matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Descouvemont, P; Baye, D [Physique Nucleaire Theorique et Physique Mathematique, C.P. 229, Universite Libre de Bruxelles (ULB), B 1050 Brussels (Belgium)], E-mail: pdesc@ulb.ac.be, E-mail: dbaye@ulb.ac.be

    2010-03-15

    The different facets of the R-matrix method are presented pedagogically in a general framework. Two variants have been developed over the years: (i) The 'calculable' R-matrix method is a calculational tool to derive scattering properties from the Schroedinger equation in a large variety of physical problems. It was developed rather independently in atomic and nuclear physics, with too little mutual influence. (ii) The 'phenomenological' R-matrix method is a technique to parametrize various types of cross sections. It was mainly (if not exclusively) used in nuclear physics. Both variants are explained by starting from the simple problem of scattering by a potential and are illustrated by simple examples in nuclear and atomic physics. In addition to elastic scattering, the R-matrix formalism is applied to inelastic and radiative-capture reactions. We also present more recent and more ambitious applications of the theory in nuclear physics.
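
    A one-channel worked example in the spirit of the 'calculable' variant (not taken from the review): s-wave scattering off an attractive square well, where the interior solution gives R = u(a)/(a u'(a)) at the channel radius a, and matching logarithmic derivatives to the free exterior solution sin(kr + delta) yields the phase shift. Units hbar = 2m = 1; the depth and radius are arbitrary choices.

        import numpy as np

        def swave_phase_shift(E, V0=10.0, a=1.0):
            k = np.sqrt(E)                    # exterior wavenumber
            K = np.sqrt(E + V0)               # interior wavenumber, well depth V0
            R = np.tan(K * a) / (K * a)       # R-matrix element: u(a)/(a u'(a)), u = sin(Kr)
            # matching at r = a gives tan(k a + delta) = k a R   (delta modulo pi)
            return np.arctan(k * a * R) - k * a

        print([swave_phase_shift(E) for E in (0.1, 1.0, 5.0)])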

  16. Determination of the source time function of seismic event by blind deconvolution of regional coda wavefield: application to December 22, 2009 explosion at Kambarata, Kyrgyzstan

    Science.gov (United States)

    Sebe, O. G.; Guilbert, J.; Bard, P.

    2011-12-01

    At regional distances, recovering the source time function of a seismic event is a difficult task because the Green function is unknown, owing to strong scattering of the waves by crustal heterogeneities. In contrast to classical methods based on a deterministic assessment of the Green function, this work proposes to exploit the stochastic nature of the regional coda wavefield in order to extract the source time function of a regional event. Since the work of Aki and Chouet (1975), it is well recognized that regional coda waves can provide stable and robust information on the source of seismic events. Unfortunately, the techniques proposed so far have been limited to the power spectral density of the seismic source function. A modified version of our two-step spectral factorization algorithm for coda waves [Sèbe et al. 2005] is proposed in order to include higher-order statistics (HOS) blind deconvolution techniques. Assuming that the coda excitation time series is a non-Gaussian, independent and identically distributed random signal, higher-order statistics, especially the tricorrelation, can remove the randomness of the coda excitation and extract source properties. In addition, unlike the classical second-order approach, which only provides the power spectral density, the tricorrelation retains the information on the phase spectrum of the source, allowing estimation of the source time function. This original blind deconvolution algorithm for coda waves has been applied to the regional records of the December 22, 2009 explosion at Kambarata, Kyrgyzstan. Based on statistical analyses of the higher-order cumulants, the method recovered the main properties of the source time function of this detonation: two successive explosions were identified, with a time delay of about 1.7 s and an amplitude ratio of about 2 in favour of the second explosion. This successful blind recovery of high-resolution source properties is an encouraging result toward the development
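
    The second-order limitation mentioned above can be made concrete: from a power spectral density alone, spectral factorization recovers only the minimum-phase equivalent of the source wavelet, with the true phase lost. A sketch of that classical cepstral step (the HOS/tricorrelation machinery that restores the phase is not attempted here):

        import numpy as np

        def minimum_phase_from_psd(psd):
            """Minimum-phase signal whose power spectrum matches psd (full FFT grid)."""
            n = len(psd)
            cep = np.fft.ifft(np.log(psd + 1e-12)).real    # real cepstrum of log PSD
            fold = np.zeros(n)
            fold[0] = 0.5 * cep[0]                          # halve: PSD = |H|^2
            fold[1:n // 2] = cep[1:n // 2]                  # keep causal quefrencies
            fold[n // 2] = 0.5 * cep[n // 2]
            return np.fft.ifft(np.exp(np.fft.fft(fold))).real

        t = np.arange(64)
        w = np.exp(-0.1 * t) * np.sin(0.7 * t)              # toy source wavelet
        psd = np.abs(np.fft.fft(w, 256)) ** 2
        w_min = minimum_phase_from_psd(psd)                 # same PSD, phase lost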

  17. Matrix analysis of electrical machinery

    CERN Document Server

    Hancock, N N

    2013-01-01

    Matrix Analysis of Electrical Machinery, Second Edition is a 14-chapter edition that covers the systematic analysis of electrical machinery performance. This edition discusses the principles of various mathematical operations and their application to electrical machinery performance calculations. The introductory chapters deal with the matrix representation of algebraic equations and their application to static electrical networks. The following chapters describe the fundamentals of different transformers and rotating machines and present torque analysis in terms of the currents based on the p

  18. SVD row or column symmetric matrix

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new architecture for a row or column symmetric matrix, called an extended matrix, is defined, and a precise correspondence of the singular values and singular vectors between the extended matrix and its original (namely, the mother matrix) is derived. As an illustration of its potential, we show that, for a class of extended matrices, performing the singular value decomposition on the mother matrix rather than on the extended matrix itself can save CPU time and memory without loss of numerical precision.
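
    A quick numerical check of the stated correspondence for the simplest row-symmetric extension, stacking a mother matrix A on itself (the paper's definition of an extended matrix is more general). Since [A; A]^T [A; A] = 2 A^T A, the nonzero singular values of the extended matrix are exactly sqrt(2) times those of A, so the SVD can be computed from the smaller mother matrix:

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 6))                 # mother matrix
        A_ext = np.vstack([A, A])                       # row-symmetric extended matrix

        s = np.linalg.svd(A, compute_uv=False)
        s_ext = np.linalg.svd(A_ext, compute_uv=False)
        assert np.allclose(s_ext[:4], np.sqrt(2) * s)   # remaining singular values are ~0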

  19. Minimal Realizations of Supersymmetry for Matrix Hamiltonians

    CERN Document Server

    Andrianov, Alexandr A

    2014-01-01

    The notions of weak and strong minimizability of a matrix intertwining operator are introduced, and a criterion for strong minimizability is presented. A criterion and a sufficient condition for the existence of a constant symmetry matrix for a matrix Hamiltonian are given. A method is offered for constructing a matrix Hamiltonian with a given constant symmetry matrix in terms of a set of arbitrary scalar functions and the eigen- and associated vectors of this matrix. Examples of constructing $2\times2$ matrix Hamiltonians with given symmetry matrices are worked out for different structures of the Jordan form of these matrices.

  20. Sparse Planar Array Synthesis Using Matrix Enhancement and Matrix Pencil

    Directory of Open Access Journals (Sweden)

    Mei-yan Zheng

    2013-01-01

    The matrix enhancement and matrix pencil (MEMP) method plays an important role in modern signal processing applications. In this paper, MEMP is applied to the problem of two-dimensional sparse array synthesis. Firstly, the desired array radiation pattern, taken as the original pattern to be approximated, is sampled to form an enhanced matrix. After performing the singular value decomposition (SVD) and discarding the insignificant singular values according to a prescribed approximation error, the minimum number of elements can be obtained. Secondly, in order to obtain the eigenvalues, a generalized eigen-decomposition is performed on the approximate matrix, which is the optimal low-rank approximation of the enhanced matrix corresponding to the sparse planar array; the ESPRIT algorithm is then used to pair the eigenvalues related to each dimension of the planar array. Finally, element positions and excitations of the sparse planar array are calculated from the correct pairing of eigenvalues. Simulation results are presented to illustrate the effectiveness of the proposed approach.
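
    A sketch of the first two steps described above, under simplifying assumptions: a toy "pattern" made of three 2-D exponential modes stands in for the sampled radiation pattern, the enhanced matrix is built as a block-Hankel matrix of Hankel blocks, and the minimum number of elements is read off the count of significant singular values. The pencil sizes are arbitrary, and the eigenvalue-pairing (ESPRIT) step is omitted.

        import numpy as np

        def hankel_block(row, k):
            # k x (len(row) - k + 1) Hankel matrix: H[i, j] = row[i + j]
            return np.array([row[i:i + len(row) - k + 1] for i in range(k)])

        def enhanced_matrix(Y, k1, k2):
            # block-Hankel matrix of the Hankel blocks of each row of Y
            H = [hankel_block(Y[m], k2) for m in range(Y.shape[0])]
            return np.block([[H[i + j] for j in range(Y.shape[0] - k1 + 1)]
                             for i in range(k1)])

        m, n = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
        Y = sum(np.exp(1j * (u * m + v * n))            # three modes -> three "elements"
                for u, v in [(0.4, 1.1), (1.3, 0.2), (2.0, 2.4)])

        s = np.linalg.svd(enhanced_matrix(Y, 4, 4), compute_uv=False)
        n_elements = int(np.sum(s > 1e-8 * s[0]))       # -> 3 significant singular values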