WorldWideScience

Sample records for statistical expression deconvolution

  1. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and, even after batch correction, produce cell-proportion estimates that disagree with those measured by well-established flow cytometry. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mononucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283
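
The reference-profile decomposition that PERT builds on can be illustrated with a minimal sketch (not PERT itself, which additionally models a shared multiplicative perturbation): non-negative least squares recovers mixing proportions from a bulk profile and a matrix of reference profiles. All profiles below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical reference profiles: 200 genes x 3 constituent cell types.
R = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 3))

# Simulate a heterogeneous sample with known mixing proportions.
true_p = np.array([0.6, 0.3, 0.1])
mixed = R @ true_p + rng.normal(scale=0.01, size=200)

# Standard expression deconvolution: non-negative least squares,
# then renormalization of the weights to proportions.
w, _ = nnls(R, mixed)
p_hat = w / w.sum()
print(np.round(p_hat, 2))
```

With clean references the recovered proportions match the simulated truth; PERT addresses the harder case where the constituent profiles are perturbed away from the references.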

  2. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Stain colour estimation is a prominent step in the analysis pipeline of most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.

  3. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

    In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and the use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.

  4. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

    We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for deconvolution density testing with known noise density, and efficient score tests for the case of unknown noise density.

  5. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. 
The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per-group sample sizes, and variability of the proportions.
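
The per-gene linear regression underlying this kind of analysis can be sketched as follows. This is an illustration of the general idea, not the LRCDE package: expression is regressed on cell-type proportions within each group, and a t-statistic compares one cell type's coefficient across groups. All data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # samples per group (hypothetical study size)

# Cell-type proportions for 3 cell types; each row sums to 1.
P = rng.dirichlet([5, 3, 2], size=n)

# One gene whose cell type 0 expression differs between groups.
beta_ctrl = np.array([10.0, 5.0, 2.0])
beta_case = np.array([14.0, 5.0, 2.0])
y_ctrl = P @ beta_ctrl + rng.normal(scale=0.5, size=n)
y_case = P @ beta_case + rng.normal(scale=0.5, size=n)

def fit(P, y):
    """OLS estimates of cell type-specific expression, with coefficient covariance."""
    beta, res, *_ = np.linalg.lstsq(P, y, rcond=None)
    mse = float(res[0]) / (len(y) - P.shape[1])
    return beta, mse * np.linalg.inv(P.T @ P)

b_ctrl, cov_ctrl = fit(P, y_ctrl)
b_case, cov_case = fit(P, y_case)

# Per-gene t-statistic for cell type 0: difference of group coefficients
# over the pooled standard error of that difference.
t0 = (b_case[0] - b_ctrl[0]) / np.sqrt(cov_ctrl[0, 0] + cov_case[0, 0])
print(round(float(t0), 1))
```

The abstract's point is visible here: the standard error, and hence the sensitivity, depends on the sample sizes, the variability of the proportions, and the group-specific mean squared error.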

  6. Gene Expression Deconvolution for Uncovering Molecular Signatures in Response to Therapy in Juvenile Idiopathic Arthritis.

    Directory of Open Access Journals (Sweden)

    Ang Cui

    Gene expression-based signatures help identify pathways relevant to diseases and treatments, but are challenging to construct when there is a diversity of disease mechanisms and treatments in patients with complex diseases. To overcome this challenge, we present a new application of an in silico gene expression deconvolution method, ISOpure-S1, and apply it to identify a common gene expression signature corresponding to response to treatment in 33 juvenile idiopathic arthritis (JIA) patients. Using pre- and post-treatment gene expression profiles only, we found a gene expression signature that significantly correlated with a reduction in the number of joints with active arthritis, a measure of clinical outcome (Spearman rho = 0.44, p = 0.040, Bonferroni correction). This signature may be associated with a decrease in T-cells, monocytes, neutrophils and platelets. The products of most differentially expressed genes include known biomarkers for JIA such as major histocompatibility complexes and interleukins, as well as novel biomarkers including α-defensins. This method is readily applicable to expression datasets of other complex diseases to uncover shared mechanistic patterns in heterogeneous samples.

  7. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNR down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).

  8. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
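
A one-dimensional sketch of the idea, under the assumption that the broadening kernel is known: a two-class feature histogram is blurred by a Gaussian kernel and then deconvolved with Richardson-Lucy iterations (one possible choice; the abstract does not name its three methods).

```python
import numpy as np

# Two classes appear as narrow peaks in a 1-D feature histogram.
x = np.arange(64, dtype=float)
hist = np.exp(-0.5 * ((x - 20) / 1.5) ** 2) + np.exp(-0.5 * ((x - 44) / 1.5) ** 2)

# Picture-domain noise convolves the histogram with a broadening kernel.
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 4.0) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(hist, kernel, mode="same")

# Richardson-Lucy iterations with the (assumed known, symmetric) kernel.
est = np.full_like(blurred, blurred.mean())
for _ in range(200):
    reblurred = np.convolve(est, kernel, mode="same")
    est *= np.convolve(blurred / (reblurred + 1e-12), kernel, mode="same")

# The deconvolved histogram restores the two narrow class modes.
print(int(np.argmax(est[:32])), 32 + int(np.argmax(est[32:])))
```

The sharpened histogram gives a classifier cleaner class modes to estimate statistics from, which is the purpose stated in the abstract.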

  9. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising area of endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least squares method. In trials, computer-simulated D-DBAR spectra are generated and deconvoluted in order to find the best form of regularizer and regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution after background subtraction, using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)

  10. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  11. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
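
A minimal one-dimensional analogue of the Wiener deconvolution step compared above, with an assumed Gaussian PSF and an assumed noise-to-signal ratio (the article's PSF is derived for planar transducers; this sketch only illustrates the filter itself).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256

# Object: two point reflectors; blur: assumed Gaussian PSF (sigma = 4).
obj = np.zeros(n)
obj[100], obj[140] = 1.0, 0.7
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))  # centered PSF -> transfer function
noisy = np.real(np.fft.ifft(np.fft.fft(obj) * H)) + rng.normal(scale=1e-3, size=n)

# Wiener filter: conj(H) / (|H|^2 + NSR), with an assumed noise-to-signal ratio.
nsr = 1e-4
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * W))
print(int(np.argmax(restored)))
```

The regularizing `nsr` term keeps the filter bounded where the PSF spectrum vanishes, at the cost of some residual blur.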

  12. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  13. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data has been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.

  14. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  15. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

    In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin adapting and applying this method to the deconvolution of positron lifetime annihilation spectra.

  16. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF-fitting photometry are commonly used. Iterative methods are important especially in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.

  17. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  18. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals

  19. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  20. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free field motion both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra, and (d) cross-spectral densities.
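
As a toy example of the kind of closed-form transfer function mentioned above, the textbook amplitude response of a uniform damped soil layer over rigid rock can be evaluated directly (layer properties are assumed; this is an illustration, not the DIGES implementation).

```python
import numpy as np

# Closed-form amplitude of the transfer function of a uniform damped soil
# layer over rigid rock (textbook approximation):
#   |H(f)| = 1 / sqrt(cos^2(a) + (xi * a)^2),  with  a = 2*pi*f*h / Vs
h, vs, xi = 30.0, 300.0, 0.05  # assumed thickness (m), shear velocity (m/s), damping
f = np.linspace(0.1, 10.0, 1000)
a = 2 * np.pi * f * h / vs
H = 1.0 / np.sqrt(np.cos(a) ** 2 + (xi * a) ** 2)

# Fundamental resonance of the layer: f0 = Vs / (4 h) = 2.5 Hz.
f0 = float(f[np.argmax(H)])
print(round(f0, 2))
```

Convolution multiplies the rock-motion spectrum by H; deconvolution divides the surface-motion spectrum by it, which is why bounded closed-form transfer functions are convenient.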

  1. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  2. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data has been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs

  3. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on the Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed as the same continuous GRBF model; thus image degradation is simplified as the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit (GPU) multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
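
The simplification the abstract relies on, that the convolution of two Gaussians is again a Gaussian, is easy to verify numerically: the variances add.

```python
import numpy as np

x = np.arange(-50, 51, dtype=float)

def gauss(x, sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

# Convolve two Gaussians (sigma 3 and 4); the result is again Gaussian.
conv = np.convolve(gauss(x, 3.0), gauss(x, 4.0), mode="same")

# Its variance is the sum of the input variances: 9 + 16 = 25.
var = float(np.sum(x ** 2 * conv) / np.sum(conv))
print(round(var, 1))
```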

  4. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component, with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
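
Water-level deconvolution, one of the methods compared above, can be sketched in a few lines: spectral division by the source estimate, with small spectral amplitudes clipped so the inverse filter stays bounded. The wavelet and arrival times here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
t = np.arange(n) * 0.1

# Impulsive "arrivals" convolved with an assumed empirical source wavelet.
refl = np.zeros(n)
refl[100], refl[220] = 1.0, -0.5
wavelet = np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 2.0)
trace = np.real(np.fft.ifft(np.fft.fft(refl) * np.fft.fft(wavelet)))
trace += rng.normal(scale=1e-3, size=n)

# Water-level deconvolution: clip small source spectral amplitudes before
# spectral division so the inverse filter stays bounded.
S = np.fft.fft(wavelet)
level = 0.01 * np.max(np.abs(S))
denom = np.where(np.abs(S) < level, level * np.exp(1j * np.angle(S)), S)
dec = np.real(np.fft.ifft(np.fft.fft(trace) / denom))
print(int(np.argmax(dec)), int(np.argmin(dec)))
```

The deconvolved trace collapses each wavelet back toward an impulse at its onset time, which is the deblurring effect the abstract describes; TV regularization replaces the water level with a sparsity penalty.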

  5. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

    The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software which utilizes a standard peak search algorithm. (author)

  6. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern and the targets' backscattering coefficients. Therefore, deconvolution algorithms are proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially a highly ill-posed problem that is sensitive to noise and cannot ensure reliable and robust estimation. In this paper, multi-channel deconvolution is proposed for improving the performance of deconvolution, which is intended to considerably alleviate the ill-posed problem of single-channel deconvolution. To depict the performance improvement obtained by multi-channel processing more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular value distribution of the observation matrix, and are used to compare different deconvolution systems. Here we present two multi-channel deconvolution algorithms which improve upon the traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data are presented to verify the effectiveness of the proposed imaging methods.

  7. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
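
The underlying arithmetic of charge deconvolution can be shown in a toy example: two adjacent charge states of the same species determine the charge, and hence the neutral mass (values are hypothetical; real deconvolution must additionally handle adducts, multimers and overlapping species, as the abstract notes).

```python
# Electrospray ions of neutral mass M at charge z appear at
# m/z = (M + z * m_p) / z, with m_p the proton mass.
M_PROTON = 1.00728  # Da

def neutral_mass(mz: float, z: int) -> float:
    """Deconvolved (neutral) mass from one m/z peak and its charge."""
    return z * (mz - M_PROTON)

# Hypothetical peaks for a 20 kDa protein at charges 11 and 10.
M = 20000.0
mz_hi = (M + 11 * M_PROTON) / 11  # higher charge, lower m/z
mz_lo = (M + 10 * M_PROTON) / 10

# Adjacent charge states determine the charge of the lower-m/z peak:
# z * (mz_hi - m_p) = (z - 1) * (mz_lo - m_p)  =>  solve for z.
z = round((mz_lo - M_PROTON) / (mz_lo - mz_hi))
mass = neutral_mass(mz_hi, z)
print(z, round(mass, 2))
```

A deconvolution algorithm effectively performs this inference over many peaks at once; assigning a wrong charge to a peak series is exactly what produces the fractional-mass artifacts described above.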

  8. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to derive the iterative formula of the error-prediction filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. Maximizing the entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
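The Toeplitz/Levinson machinery mentioned above can be sketched with the standard Levinson-Durbin recursion; this is a generic illustration, not the authors' seismological code. The reflection coefficients `k` are the quantities whose magnitudes staying below 1 keep the recursion stable:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: from autocorrelations r[0..order], compute
    the prediction-error filter a (a[0] = 1), the reflection coefficients k,
    and the final prediction-error power e."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    k = np.zeros(order)
    for m in range(1, order + 1):
        # Correlation of the current filter with the next lag.
        acc = r[m] + a[1:m] @ r[m - 1:0:-1]
        k[m - 1] = -acc / e
        a_prev = a.copy()
        for i in range(1, m + 1):
            a[i] = a_prev[i] + k[m - 1] * a_prev[m - i]
        e *= 1.0 - k[m - 1] ** 2
    return a, k, e

# Autocorrelation of a test signal; a valid autocorrelation gives |k| < 1.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
r = np.array([x[:256 - m] @ x[m:] for m in range(6)])
a, k, e = levinson(r, 5)
```

The filter satisfies the augmented Yule-Walker (Toeplitz) equations exactly, which is easy to verify numerically.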

  9. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  10. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: it allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels.
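A minimal sketch of the Monte-Carlo idea, assuming a hypothetical Gaussian-response detector and Gaussian channel noise (the paper's x-ray setup, its m < n nonuniqueness, and the nonnegativity constraint are not reproduced here): perturb the data with the noise model, unfold each replica through the SVD pseudo-inverse, and take the per-bin spread as the error bar.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response: 8 detector channels, each a Gaussian window over
# a 6-bin source spectrum.
n_ch, n_bins = 8, 6
chan_pos = np.linspace(0.0, 1.0, n_ch)
bin_pos = np.linspace(0.0, 1.0, n_bins)
R = np.exp(-((chan_pos[:, None] - bin_pos[None, :]) ** 2) / 0.02)

truth = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])
data = R @ truth
sigma = 0.05 * data.max()          # assumed Gaussian channel noise level

def unfold(d):
    # SVD-based pseudo-inverse; singular values below rcond are truncated.
    return np.linalg.pinv(R, rcond=1e-6) @ d

# Monte-Carlo error propagation: resample the data with the noise model,
# unfold each replica, and report the per-bin spread as the error bar.
replicas = np.array([unfold(data + rng.normal(0.0, sigma, n_ch))
                     for _ in range(500)])
spectrum = unfold(data)
err_bars = replicas.std(axis=0)
```

The same resampling loop works unchanged for any deterministic unfolding routine, which is what makes the approach easy to bolt onto an existing deconvolution code.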

  11. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors.
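When the arrival times of the piled-up pulses are known, the least-squares approach reduces to fitting amplitudes of shifted copies of the single-pulse template; a sketch under that simplifying assumption (the pulse shape and time constants below are a generic scintillator-like stand-in, not the paper's LSO/NaI(Tl) shapes):

```python
import numpy as np

def pulse(t, t0, tau_r=2.0, tau_d=10.0):
    """Generic scintillator-like single-pulse template (fast rise, slow decay)."""
    dt = np.clip(t - t0, 0.0, None)
    return (1.0 - np.exp(-dt / tau_r)) * np.exp(-dt / tau_d)

t = np.arange(0.0, 100.0, 1.0)
arrivals = [20.0, 28.0]                    # two pulses only 8 samples apart
a_true = np.array([1.0, 0.6])

# Piled-up waveform: a linear combination of shifted templates.
A = np.column_stack([pulse(t, t0) for t0 in arrivals])
waveform = A @ a_true

# With known arrival times, the amplitudes follow from linear least squares.
a_est, *_ = np.linalg.lstsq(A, waveform, rcond=None)
```

Recovering the individual amplitudes is what lets pile-up events be kept in the spectrum instead of rejected, extending the usable counting-rate range.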

  12. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into the stability, accuracy, and time complexity of a numerical bijection between the time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric…, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving… discrimination achieves mixed phase deconvolution and is equivalent to complex cepstrum based deconvolution under causality, which has lower time and space complexities as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing…
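The "convolution becomes union of zero sets" property of the ZZT domain can be checked numerically; a small sketch with arbitrarily chosen coefficient vectors:

```python
import numpy as np

# Two short sequences, viewed as coefficient vectors of polynomials in z.
x = np.array([1.0, -0.9, 0.2])
y = np.array([1.0, 0.5])

# Time-domain convolution is polynomial multiplication, so the zero set
# (ZZT) of the convolution is the union of the factors' zero sets.
zeros_union = np.sort_complex(np.concatenate([np.roots(x), np.roots(y)]))
zeros_conv = np.sort_complex(np.roots(np.convolve(x, y)))

# Deconvolution in this domain amounts to removing one factor's zeros from
# the set and rebuilding the coefficients, e.g. with np.poly (monic result).
x_rec = np.poly(np.roots(x))
```

In practice the hard part, as the thesis notes, is doing this root-finding stably and deciding which zeros belong to which factor.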

  13. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  14. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. a priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
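The Van Cittert iteration underlying the approximate deconvolution method is compact enough to sketch; here a hypothetical 1-D tanh profile stands in for a filtered flame variable, and a Gaussian kernel stands in for the LES filter:

```python
import numpy as np

def gaussian_kernel(sigma, half=12):
    t = np.arange(-half, half + 1)
    g = np.exp(-t**2 / (2.0 * sigma**2))
    return g / g.sum()

def van_cittert(filtered, kernel, n_iter=50):
    """Approximate deconvolution: phi_{k+1} = phi_k + (f - G * phi_k)."""
    phi = filtered.copy()
    for _ in range(n_iter):
        phi = phi + (filtered - np.convolve(phi, kernel, mode="same"))
    return phi

# Hypothetical 1-D "flame" profile: a progress variable rising from 0 to 1.
x = np.linspace(-10.0, 10.0, 400)
truth = 0.5 * (1.0 + np.tanh(2.0 * x))
G = gaussian_kernel(sigma=4.0)
filtered = np.convolve(truth, G, mode="same")

recovered = van_cittert(filtered, G)
err_filtered = np.linalg.norm(filtered - truth)
err_recovered = np.linalg.norm(recovered - truth)
```

The iteration damps each frequency component by (1 - G)^k per step, so scales the filter passes are recovered quickly while scales the filter annihilates are not recovered at all, which is the ill-posedness the abstract refers to.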

  15. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and the seismic wavelet. One of the most important stages in seismic data processing is deconvolution; the deconvolution process uses inverse filters based on Wiener filter theory. This theory is limited by certain modelling assumptions, which may not always be valid. The discrete form of the Kalman filter is then used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its ability to handle continually time-varying models and its high resolution capabilities. In this work, we use a discrete Kalman filter combined with primitive deconvolution. The filtering process works on the reflectivity function, hence the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with an energy waveform referred to as the seismic wavelet. A higher-frequency wavelet gives a smaller wavelength; graphs of these results are presented.

  16. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used…
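A restoration algorithm of the kind described above, one that iteratively seeks an object which, convolved with the point-spread function, reproduces the image, can be sketched with the classic Richardson-Lucy iteration (a 1-D toy, not the chapter's specific software):

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=100):
    """Richardson-Lucy restoration (1-D): multiplicative updates that keep the
    estimate nonnegative and drive PSF*estimate toward the observed image."""
    est = np.full_like(image, image.mean())
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
        est *= np.convolve(ratio, psf_flipped, mode="same")
    return est

# Two nearby point-like objects blurred by a Gaussian PSF.
n = 128
obj = np.zeros(n)
obj[60], obj[68] = 1.0, 0.7
t = np.arange(-12, 13)
psf = np.exp(-t**2 / (2 * 3.0**2))
psf /= psf.sum()
image = np.convolve(obj, psf, mode="same")

restored = richardson_lucy(image, psf)
```

The multiplicative form is why restoration preserves nonnegativity, one of the properties that makes these algorithms safer than deblurring for subsequent quantitative analysis.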

  17. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  18. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  19. Filtering and deconvolution for bioluminescence imaging of small animals; Filtrage et deconvolution en imagerie de bioluminescence chez le petit animal

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

    This thesis is devoted to the analysis of bioluminescence images applied to the small animal. This kind of imaging modality is used in cancerology studies. However, the light of internal bioluminescent sources is diffused and absorbed by the tissues, and system noise and cosmic-ray noise are also present. This degrades the quality of the images and makes them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise which corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms. It led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing our results with the ground truth. Through various clinical tests, we show that the processing chain allows a significant improvement of the spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for the users of bioluminescence images. (author)

  20. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and the pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
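The "deconvolution as matrix inversion" framing can be sketched without the neural network: build the convolution (Toeplitz) matrix and compare the pseudo-inverse with an LMS-style gradient descent (kernel and sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Convolution as multiplication by a banded Toeplitz matrix H: y = H x.
h = np.array([0.5, 1.0, 0.3])              # arbitrary blurring kernel
n = 40
H = np.zeros((n + len(h) - 1, n))
for j in range(n):
    H[j:j + len(h), j] = h

x_true = rng.standard_normal(n)
y = H @ x_true

# Direct inversion via the Moore-Penrose pseudo-inverse.
x_pinv = np.linalg.pinv(H) @ y

# Iterative LMS-style gradient descent on ||y - H x||^2.
mu = 0.5 / np.linalg.norm(H, 2) ** 2       # step size within the stability bound
x_lms = np.zeros(n)
for _ in range(5000):
    x_lms += mu * (H.T @ (y - H @ x_lms))
```

Both routes converge to the same solution here because H has full column rank; a trained network is, in effect, learning an approximation of the same inverse map.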

  1. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution applies the sparse deconvolution (MM algorithm) and the Smoothed-One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it is necessary to implement it on post-stack or pre-stack seismic data of complex structure regions.
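The sparse-spiking idea can be illustrated with a generic l1-regularized deconvolution solved by ISTA; this is a stand-in sketch, not the MM or SOOT algorithm itself, and the short wavelet below is hypothetical:

```python
import numpy as np

def ista(trace, wavelet, lam=0.05, n_iter=400):
    """ISTA for min_x 0.5*||w * x - trace||^2 + lam*||x||_1 (sparse spikes)."""
    n = len(trace) - len(wavelet) + 1
    H = np.zeros((len(trace), n))          # full-convolution matrix of the wavelet
    for j in range(n):
        H[j:j + len(wavelet), j] = wavelet
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - (H.T @ (H @ x - trace)) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Sparse reflectivity and a short hypothetical wavelet.
refl = np.zeros(100)
refl[30], refl[55], refl[70] = 1.0, -0.8, 0.5
w = np.array([0.6, 1.0, -0.4, 0.1])
trace = np.convolve(refl, w)               # full convolution, length 103

est = ista(trace, w)
```

The l1 penalty is what compresses the trace back into isolated spikes; smooth surrogates for norm ratios, as in SOOT, pursue the same sparsity with a different cost function.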

  2. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp

    2017-09-04

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular this univariate case highly suffers from several non-trivial ambiguities and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP) demonstrating that -theoretically- recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.

  3. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular this univariate case highly suffers from several non-trivial ambiguities and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP) demonstrating that -theoretically- recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.

  4. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.

  5. Two-stage, in silico deconvolution of the lymphocyte compartment of the peripheral whole blood transcriptome in the context of acute kidney allograft rejection.

    Science.gov (United States)

    Shannon, Casey P; Balshaw, Robert; Ng, Raymond T; Wilson-McManus, Janet E; Keown, Paul; McMaster, Robert; McManus, Bruce M; Landsberg, David; Isbel, Nicole M; Knoll, Greg; Tebbutt, Scott J

    2014-01-01

    Acute rejection is a major complication of solid organ transplantation that prevents the long-term assimilation of the allograft. Various populations of lymphocytes are principal mediators of this process, infiltrating graft tissues and driving cell-mediated cytotoxicity. Understanding the lymphocyte-specific biology associated with rejection is therefore critical. Measuring genome-wide changes in transcript abundance in peripheral whole blood cells can deliver a comprehensive view of the status of the immune system. The heterogeneous nature of the tissue significantly affects the sensitivity and interpretability of traditional analyses, however. Experimental separation of cell types is an obvious solution, but is often impractical and, more worrying, may affect expression, leading to spurious results. Statistical deconvolution of the cell type-specific signal is an attractive alternative, but existing approaches still present some challenges, particularly in a clinical research setting. Obtaining time-matched sample composition to biologically interesting, phenotypically homogeneous cell sub-populations is costly and adds significant complexity to study design. We used a two-stage, in silico deconvolution approach that first predicts sample composition to biologically meaningful and homogeneous leukocyte sub-populations, and then performs cell type-specific differential expression analysis in these same sub-populations, from peripheral whole blood expression data. We applied this approach to a peripheral whole blood expression study of kidney allograft rejection. The patterns of differential composition uncovered are consistent with previous studies carried out using flow cytometry and provide a relevant biological context when interpreting cell type-specific differential expression results. We identified cell type-specific differential expression in a variety of leukocyte sub-populations at the time of rejection. The tissue-specificity of these differentially

  6. Two-stage, in silico deconvolution of the lymphocyte compartment of the peripheral whole blood transcriptome in the context of acute kidney allograft rejection.

    Directory of Open Access Journals (Sweden)

    Casey P Shannon

    Full Text Available Acute rejection is a major complication of solid organ transplantation that prevents the long-term assimilation of the allograft. Various populations of lymphocytes are principal mediators of this process, infiltrating graft tissues and driving cell-mediated cytotoxicity. Understanding the lymphocyte-specific biology associated with rejection is therefore critical. Measuring genome-wide changes in transcript abundance in peripheral whole blood cells can deliver a comprehensive view of the status of the immune system. The heterogeneous nature of the tissue significantly affects the sensitivity and interpretability of traditional analyses, however. Experimental separation of cell types is an obvious solution, but is often impractical and, more worrying, may affect expression, leading to spurious results. Statistical deconvolution of the cell type-specific signal is an attractive alternative, but existing approaches still present some challenges, particularly in a clinical research setting. Obtaining time-matched sample composition to biologically interesting, phenotypically homogeneous cell sub-populations is costly and adds significant complexity to study design. We used a two-stage, in silico deconvolution approach that first predicts sample composition to biologically meaningful and homogeneous leukocyte sub-populations, and then performs cell type-specific differential expression analysis in these same sub-populations, from peripheral whole blood expression data. We applied this approach to a peripheral whole blood expression study of kidney allograft rejection. The patterns of differential composition uncovered are consistent with previous studies carried out using flow cytometry and provide a relevant biological context when interpreting cell type-specific differential expression results. We identified cell type-specific differential expression in a variety of leukocyte sub-populations at the time of rejection. The tissue-specificity of

  7. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and super-resolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and utility of the method.

  8. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting, and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak width variation performs best. The ultimate approach, as yet to be implemented, might be library-oriented fitting with allowed peak position variation according to the peak energy uncertainty specified in the library. (author)

  9. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine pentaacetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20 s frames were acquired; the study on each patient lasted approximately 20 min. Time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R(2) = 0.68) was found between the values obtained by the two methods. Bland-Altman analysis demonstrated that 97% of the values (31 of 32 kidneys) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P method is likely to be more reproducible than iterative deconvolution, because the iterative deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot and can be considered an alternative technique for calculating the renal uptake rate.
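The R-P plot itself is a simple linear regression: plotting kidney counts over blood counts against the integral of blood counts over blood counts gives a straight line whose slope is the uptake rate. A sketch with noiseless synthetic curves (all curve shapes and rate constants below are hypothetical):

```python
import numpy as np

# Simulated time-activity curves: 20 s frames over 20 min.
t = np.arange(20.0, 1220.0, 20.0)
blood = t * np.exp(-t / 180.0)             # hypothetical heart (input) curve
blood_int = np.cumsum(blood) * 20.0        # running integral of the input

uptake_rate = 0.004                        # assumed true parenchymal uptake rate
vascular_frac = 0.15                       # assumed intrarenal vascular fraction
kidney = uptake_rate * blood_int + vascular_frac * blood

# R-P plot: kidney/blood against integral(blood)/blood is a straight line;
# its slope estimates the uptake rate, its intercept the vascular fraction.
xs = blood_int / blood
ys = kidney / blood
slope, intercept = np.polyfit(xs, ys, 1)
```

Because the estimate pools all time points into one regression, a single noisy early frame perturbs it far less than it perturbs an iterative deconvolution, which is the reproducibility argument made above.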

  10. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed-interval Kalman filter based deconvolution routine written in C is employed. It uses a state… resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse, the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding… to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core of an Intel i5 running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds, making it possible to perform real-time deconvolution on ultrasound data…

  11. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  12. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed; the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples at thermal equilibrium and not at thermal equilibrium is reported. The method is accurate and computationally efficient.

  13. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method outperforms existing methods and can potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  14. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé, François-Xavier; Fadili, Jalal M.; Starck, Jean-Luc

    2012-01-01

    In this paper, we propose a Bayesian MAP estimator for solving deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type sparsity priors are considered.

  15. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  16. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

    This thesis is devoted to the analysis of bioluminescence images of small animals. This imaging modality is used in cancerology studies. However, the light from internal bioluminescent sources is diffused and absorbed by the tissues; in addition, system noise and cosmic-ray noise are present. This degrades the quality of the images and makes them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise which corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing our results with the ground truth. Through various clinical tests, we have shown that the processing chain allows a significant improvement of the spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for the users of bioluminescence images. (author)
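
    The filtering stage described above can be illustrated with a minimal sketch: a plain 3x3 median filter (in NumPy, on synthetic data; the thesis's specific median-filter variant for random-valued impulse noise is not reproduced here) removes isolated cosmic-ray-like spikes while leaving a smooth source image largely intact.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter via edge-padded stacking; the median is robust to
    isolated impulsive outliers such as cosmic-ray hits."""
    p = np.pad(img, 1, mode="edge")
    rows, cols = img.shape
    stack = [p[i:i + rows, j:j + cols] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))       # smooth "source" image
noisy = clean.copy()
hits = rng.random(clean.shape) < 0.02                  # ~2% impulsive outliers
noisy[hits] = rng.uniform(5.0, 10.0, size=hits.sum())  # cosmic-ray-like spikes
denoised = median3x3(noisy)
```

    Because isolated spikes occupy at most a few of the nine values in each window, the median ignores them; a deconvolution stage run on `denoised` is then far less affected by impulse noise.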

  17. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift-invariant case, a method for the estimation of the PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  18. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

    In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data, which represent a convolution of the former function with an instrument-dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result, the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error, and down to ∼20 μeV with 10% relative error. (orig.)
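
    The Tikhonov step can be sketched in a toy 1-D setting (synthetic resolution matrix, invented parameters; the paper's transformation to a non-linear problem is not reproduced):

```python
import numpy as np

# Toy forward model: y = R x, with R a Gaussian resolution (convolution) matrix.
n = 100
idx = np.arange(n)
R = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
R /= R.sum(axis=1, keepdims=True)                  # each row integrates to 1

rng = np.random.default_rng(0)
x_true = np.exp(-0.5 * ((idx - 50) / 4.0) ** 2)    # narrow "scattering function"
y = R @ x_true + 1e-4 * rng.standard_normal(n)     # smoothed, noisy measurement

# Tikhonov estimate: minimize ||R x - y||^2 + lam * ||L x||^2,
# with L the second-difference operator enforcing smoothness.
L = np.diff(np.eye(n), 2, axis=0)
lam = 1e-5
x_hat = np.linalg.solve(R.T @ R + lam * (L.T @ L), R.T @ y)
```

    The smoothness weight `lam` trades noise amplification against resolution; the values here are chosen for the toy problem, not taken from the paper.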

  19. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
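
    Deconvolution in Fourier space, as used for both applications above, amounts to a spectral division by the system's frequency response. A minimal sketch on synthetic data (the small `eps` floor is an assumption here, added to keep weakly transferred frequencies from blowing up):

```python
import numpy as np

# Toy system: an exponential impulse response acting as a low-pass filter.
n = 256
t = np.arange(n)
h = np.exp(-t / 5.0)
h /= h.sum()                                   # impulse response (assumed known)
x = np.zeros(n)
x[[40, 90, 91, 160]] = 1.0                     # "true" input signal
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular convolution

# Deconvolution by regularized spectral division: X = Y H* / (|H|^2 + eps).
H = np.fft.fft(h)
eps = 1e-6                                     # floor to stabilize weak frequencies
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

    In practice the frequency response `H` must first be calibrated, which is exactly the experimental and theoretical determination the abstract discusses.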

  20. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.
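
    The key property behind cepstral deconvolution is that convolution in the time domain becomes addition in the cepstral domain, so convolved components can be separated there by liftering. A minimal sketch with the real cepstrum on a synthetic wavelet/reflectivity pair (the report treats the complex cepstrum, which additionally retains phase):

```python
import numpy as np

def real_cepstrum(sig):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum."""
    return np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(sig)))))

n = 128
w = np.exp(-np.arange(n) / 10.0)        # smooth "wavelet" component
r = np.zeros(n)
r[[20, 50, 90]] = [1.0, 0.6, 0.3]       # sparse "reflectivity" component
s = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(r)))  # convolved trace

# log|S| = log|W| + log|R|, so the cepstrum of the trace is the sum of the
# component cepstra; liftering (windowing in quefrency) can then split them.
c_s, c_w, c_r = real_cepstrum(s), real_cepstrum(w), real_cepstrum(r)
```

    The smooth wavelet concentrates at low quefrency and the sparse reflectivity at higher quefrency, which is what makes the separation by liftering possible.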

  1. A HOS-based blind deconvolution algorithm for the improvement of time resolution of mixed phase low SNR seismic data

    International Nuclear Information System (INIS)

    Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M

    2009-01-01

    A blind deconvolution technique using a modified higher order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low-SNR, short-length seismograms. In our study, the seismogram is assumed to be the output of a mixed-phase source wavelet (system) driven by a non-Gaussian input signal (due to the earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum phase seismic signals because they only rely on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable for processing a non-minimum (mixed) phase system; however, most of them are unable to converge and show poor performance whenever noise dominates the actual signal, especially in cases where the observed data are limited (few samples). The developed blind equalization technique is primarily based on the EVA for blind equalization, initially to deal with mixed-phase non-Gaussian seismic signals. In order to deal with the dominant noise issue and the small number of available samples, certain modifications are incorporated into the EVA. For determining the deconvolution filter, one of the modifications is to use more than one higher order cumulant slice in the EVA. This overcomes the possibility of non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by increasing the power of the eigenvalues of the cumulant slice related to the actual signal, and rejects the eigenvalues below the threshold representing the noise. This modification reduces the effect of the availability of a small number of samples and strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in mean square error (MSE) and a six

  2. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  3. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution for ionization chambers of different volumes. Methods: A 0.125 cc Semiflex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume-averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes ranging from 2 × 2 cm to 20 × 20 cm, along both the lateral and longitudinal directions. For field sizes from 20 × 20 cm to 30 × 30 cm, however, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvolved profiles along the lateral and longitudinal directions was of the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and is not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume-averaging effect.

  4. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  5. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

    Woelffel, William; Claireaux, Corinne; Toplis, Michael J.; Burov, Ekaterina; Barthel, Etienne; Shukla, Abhay; Biscaras, Johan; Chopinet, Marie-Hélène; Gouillart, Emmanuelle

    2015-01-01

    Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi...
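
    Non-negative matrix factorization of a set of spectra can be sketched with the classic Lee-Seung multiplicative updates (plain NumPy, synthetic two-component "spectra"; the paper's specific deconvolution and constraints are not reproduced):

```python
import numpy as np

# Multiplicative-update NMF (Lee & Seung): factor V ≈ W H with W, H >= 0.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
bands = np.vstack([np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2),
                   np.exp(-0.5 * ((x - 0.7) / 0.08) ** 2)])  # two pure "bands"
weights = rng.uniform(0.1, 1.0, size=(30, 2))                # mixing fractions
V = weights @ bands                                          # 30 observed spectra

k, eps = 2, 1e-12
W = rng.uniform(0.1, 1.0, size=(30, k))
H = rng.uniform(0.1, 1.0, size=(k, 200))
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # update component spectra
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing weights

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The multiplicative updates keep all factors nonnegative by construction, which is what makes the recovered rows of `H` interpretable as physical component spectra.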

  6. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing, so as to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it; this information translates the physical properties of ultrasonic signals. The defect impulse response is modeled as a Double-Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables not only the removal of the waveform emitted by the transducer but also the estimation of the phase. This parameter is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  7. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise exist widely when spectra are captured using an infrared spectrometer, especially since the aging of instruments has become a serious problem. In this paper, by introducing the similarity of multiscales, a blind spectral deconvolution method is proposed. Considering that there is a similarity between latent spectra at different scales, this similarity is used as prior knowledge to constrain the estimated latent spectrum to be similar to its pre-scale version, reducing the artifacts produced by deconvolution. The experimental results indicate that the proposed method achieves better performance than state-of-the-art methods, obtaining satisfying deconvolution results with fewer artifacts. The recovered infrared spectra make it easy to extract spectral features and recognize unknown objects. (paper)

  8. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STFs) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green’s function (EGF) method can be used to estimate STFs, but it imposes strict recording conditions: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, defined as the peak divided by the background value, where the background value is the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these, we found about 2700 deconvolutions with sdc greater than 9 which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer source scaling using the STFs, and we will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
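
    The "sdc" spikiness measure described in the abstract is easy to state in code. This sketch follows the stated definition (peak divided by the mean absolute background, excluding 10 s around the source time function); the half-window parameterization and the toy traces are assumptions for illustration:

```python
import numpy as np

def sdc(decon, dt, half_window_s=5.0):
    """Spikiness of a deconvolution trace: peak amplitude divided by the mean
    absolute background, excluding a window (10 s total by default) around
    the peak."""
    decon = np.asarray(decon, dtype=float)
    i_peak = int(np.argmax(np.abs(decon)))
    half = int(round(half_window_s / dt))
    mask = np.ones(decon.size, dtype=bool)
    mask[max(0, i_peak - half):i_peak + half + 1] = False
    return np.abs(decon[i_peak]) / np.mean(np.abs(decon[mask]))

dt = 0.1
spiky = np.full(1000, 0.01)   # pulse-like deconvolution: similar Green's functions
spiky[500] = 1.0
diffuse = np.ones(1000)       # featureless deconvolution: dissimilar path effects
```

    A pulse-like trace scores well above the paper's threshold of about 10, while a featureless trace scores near 1, which is how the procedure screens a million candidate deconvolutions automatically.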

  9. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.

  10. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the actual source distribution with the beamformer's point-spread function, and that the beamformer's point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.
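
    One of the deconvolution algorithms mentioned above, Richardson-Lucy, can be sketched for a 1-D shift-invariant PSF using FFT convolutions (toy source map and PSF, both invented; real beamforming maps are 2-D):

```python
import numpy as np

def richardson_lucy(b, psf, n_iter=200):
    """Richardson-Lucy iterations for a shift-invariant PSF, using circular
    FFT convolutions; iterates stay nonnegative by construction."""
    P = np.fft.fft(psf)

    def conv(u, F):
        return np.maximum(np.real(np.fft.ifft(np.fft.fft(u) * F)), 0.0)

    x = np.full_like(b, b.mean())          # flat nonnegative starting guess
    for _ in range(n_iter):
        ratio = b / (conv(x, P) + 1e-12)   # data over current model
        x *= conv(ratio, np.conj(P))       # correlate the ratio with the PSF
    return x

n = 128
psf = np.zeros(n)
psf[:7] = [0.05, 0.15, 0.3, 0.3, 0.15, 0.04, 0.01]   # normalized toy PSF
src = np.zeros(n)
src[[30, 31, 80]] = [2.0, 1.0, 3.0]                  # toy source distribution
beammap = np.real(np.fft.ifft(np.fft.fft(src) * np.fft.fft(psf)))
estimate = richardson_lucy(beammap, psf)
```

    Because the PSF is normalized, each iteration conserves total map "energy" while concentrating it back onto the sources, which is why the deconvolved map is sharper than the raw beamforming map.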

  11. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and allocate special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques, based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution

  12. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
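
    The basic SOR iteration applied to a regularized deconvolution can be sketched as follows (toy 1-D problem with an invented blur matrix and a fixed relaxation parameter; the paper's adaptive update of the relaxation parameter and the positivity-constrained +SOR are not reproduced):

```python
import numpy as np

def sor_solve(M, b, omega=1.5, n_sweeps=500):
    """Successive overrelaxation (Gauss-Seidel plus relaxation) for the
    symmetric positive definite system M x = b."""
    x = np.zeros_like(b)
    d = np.diag(M)
    for _ in range(n_sweeps):
        for i in range(b.size):
            r = b[i] - M[i] @ x          # residual using the latest values
            x[i] += omega * r / d[i]
    return x

# Deconvolution posed as regularized normal equations: (A^T A + lam I) x = A^T y.
n = 60
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)        # Gaussian blur matrix
x_true = np.zeros(n)
x_true[[15, 40]] = [1.0, 0.5]
y = A @ x_true
lam = 0.1
M = A.T @ A + lam * np.eye(n)
x_hat = sor_solve(M, A.T @ y)
```

    For a symmetric positive definite system, SOR converges for any relaxation parameter between 0 and 2; the fixed `omega=1.5` here stands in for the adaptive choice the paper develops.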

  13. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  14. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without enough precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve the image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with traditional image restoration methods. Even with an inaccurate, small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, with much better SNR and image resolution. The time consumption of these methods is also reported; there is no significant increase on a GPU platform.

  15. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Roč. 26, č. 5 (2017), s. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  16. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L2-norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter.
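    The three filters compared above all act as spectral divisions with different stabilizations. A minimal 1-D sketch (function names and parameter values are illustrative, not the paper's implementation):

```python
import numpy as np

def fourier_division(y, h, eps=1e-3):
    """Naive inverse filter: divide spectra, guarding small denominators."""
    H = np.fft.fft(h, len(y))
    return np.real(np.fft.ifft(np.fft.fft(y) / (H + eps)))

def wiener_deconvolve(y, h, snr=1e4):
    """Wiener filter: conj(H) / (|H|^2 + 1/SNR), assuming flat spectra."""
    H = np.fft.fft(h, len(y))
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)))

def tikhonov_deconvolve(y, h, lam=1e-4):
    """Tikhonov-regularised inversion; with an identity regulariser it has the
    same spectral form as Wiener, but lam is a free smoothing weight."""
    H = np.fft.fft(h, len(y))
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + lam)))
```

The Wiener and Tikhonov forms differ only in how the denominator offset is chosen (an assumed SNR versus a tunable regularization weight), which is why their resolution behaviour is similar while their noise behaviour differs.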

  17. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
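    The ordered-subset implementation described above is not reproduced here; as a hedged illustration of iterative deconvolution with a known blur (e.g. measured motion folded into the system response), a plain 1-D Richardson-Lucy loop:

```python
import numpy as np

def richardson_lucy(blurred, kernel, n_iter=50):
    """Plain Richardson-Lucy iteration with a known, normalised blur kernel.
    Circular convolution via FFT keeps the sketch short."""
    K = np.fft.fft(kernel)
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * K))
        ratio = blurred / np.maximum(conv, 1e-12)
        # correlate the ratio with the kernel (conj(K) = time-reversed kernel)
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(K)))
    return est
```

Ordered-subset variants accelerate this scheme by updating with subsets of the data per iteration, which is what makes the method clinically practical.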

  18. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  19. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  20. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The adopted point of view consists in taking the physical properties into account in the signal processing, in order to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account; here it translates the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence, and deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm eases data interpretation by concentrating the information, so automatic characterization should be possible in the future. (author)

  1. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low-frequency components in seismic data usually leads full waveform inversion into local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long-wavelength updates in waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, a regime in which the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly handle the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long-wavelength structure from the artificial low-frequency components introduced by the exponential damping.

  3. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  4. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at each energy.

  5. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the image formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocity caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data alone, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of the deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and for deconvolution interferometry using later coda waves.
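    The core operation, deconvolving one station's record by another's, is a stabilized spectral division. A minimal sketch (the water-level stabilization and parameter values are common practice, not necessarily this author's exact choice):

```python
import numpy as np

def deconv_interferometry(u_surface, u_borehole, water_level=0.01):
    """Deconvolve the surface record by the borehole record in the frequency
    domain; a water level on |B|^2 stabilises the division."""
    S = np.fft.fft(u_surface)
    B = np.fft.fft(u_borehole)
    denom = np.maximum(np.abs(B) ** 2, water_level * np.max(np.abs(B) ** 2))
    return np.real(np.fft.ifft(S * np.conj(B) / denom))
```

The deconvolved waveform peaks at the propagation delay between the two sensors, which is what yields the interval velocity between borehole and surface.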

  7. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error ( p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)

  8. Utilization of the statistics techniques for the analysis of the XPS (X-ray photoelectron spectroscopy) and Auger electronic spectra's deconvolutions

    International Nuclear Information System (INIS)

    Puentes, M.B.

    1987-01-01

    For the analysis of XPS (X-ray photoelectron spectroscopy) and Auger spectra, it is important to separate the peaks and estimate their intensities. For this purpose, a methodology was implemented, including: a) filtering of the spectrum; b) subtraction of the baseline (or inelastic background); c) deconvolution (separation of the distributions that make up the spectrum); and d) estimation of the error of the mean, comprising goodness-of-fit tests. Software (FORTRAN IV Plus) that applies the proposed methodology to experimental spectra was implemented, and its quality was tested with simulated spectra. (Author)

  9. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper explores the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be analysed into their components and the associated luminescence parameters to be evaluated.
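    The same Solver-style decomposition can be sketched outside Excel. In the sketch below the peak positions and widths are assumed known (hypothetical values), so the component intensities reduce to a linear least-squares problem; Gaussian shapes stand in for the first-order-kinetics peak expressions a full TL analysis would use.

```python
import numpy as np

def deconvolve_glow_curve(T, y, centers, widths):
    """Decompose a glow curve into components with known positions/widths;
    component intensities follow from linear least squares."""
    A = np.column_stack([np.exp(-0.5 * ((T - c) / w) ** 2)
                         for c, w in zip(centers, widths)])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)
    return amps, A @ amps
```

Excel's Solver performs the fully nonlinear version of this fit, adjusting positions and widths as well as intensities.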

  10. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper explores the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the Glow Curve Analysis Intercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be analysed into their components and the associated luminescence parameters to be evaluated. (authors)

  11. Increasing the darkfield contrast-to-noise ratio using a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging.

    Science.gov (United States)

    Weber, Thomas; Pelzer, Georg; Bayer, Florian; Horn, Florian; Rieger, Jens; Ritter, André; Zang, Andrea; Durst, Jürgen; Anton, Gisela; Michel, Thilo

    2013-07-29

    A novel information retrieval algorithm for X-ray grating-based phase-contrast imaging, based on the deconvolution of the object and reference phase-stepping curves (PSCs) as proposed by Modregger et al., was investigated in this paper. We applied the method for the first time to data obtained with a polychromatic spectrum and compared the results to those obtained with the commonly used method based on Fourier analysis. We confirmed the expectation that both methods deliver the same results for the absorption and differential phase images. For the darkfield image, a mean contrast-to-noise ratio (CNR) increase by a factor of 1.17 was found using the new method. Furthermore, the dose-saving potential of the deconvolution method was estimated experimentally. We found that the conventional method requires a dose higher by a factor of 1.66 to obtain a CNR similar to that of the novel method. A further analysis of the data revealed that the improvement in CNR and dose efficiency is due to the superior background noise properties of the deconvolution method, but at the cost of comparability between measurements at different applied dose values, as the mean value becomes dependent on the photon statistics used.
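    The conventional Fourier analysis that the deconvolution method is compared against extracts all three image channels from the harmonics of the phase-stepping curve. A minimal sketch (sampling and parameter values are illustrative):

```python
import numpy as np

def psc_fourier_analysis(obj, ref):
    """Conventional Fourier processing of phase-stepping curves: transmission
    from the 0th harmonic, differential phase from the argument of the 1st,
    darkfield from the ratio of visibilities."""
    O = np.fft.fft(obj)
    R = np.fft.fft(ref)
    transmission = np.abs(O[0]) / np.abs(R[0])
    dphi = np.angle(O[1]) - np.angle(R[1])
    darkfield = (np.abs(O[1]) / np.abs(O[0])) / (np.abs(R[1]) / np.abs(R[0]))
    return transmission, dphi, darkfield
```

The deconvolution method of Modregger et al. instead recovers the full scattering distribution from the two PSCs, of which these harmonic ratios are summary statistics.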

  12. Digital sorting of complex tissues for cell type-specific gene expression profiles.

    Science.gov (United States)

    Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong

    2013-03-07

    Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell-type frequencies within a tissue or of their in vitro expression profiles; furthermore, these algorithms tend to report biased estimates. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell-type-specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell-type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
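    For contrast with DSA, the standard deconvolution setting it relaxes can be sketched directly: given known reference profiles, mixing fractions follow from least squares. The crude clip-and-renormalize projection below is illustrative, not DSA itself.

```python
import numpy as np

def estimate_fractions(mixture, references):
    """Standard expression deconvolution with known reference profiles
    (genes x cell types): estimate mixing fractions by least squares,
    then crudely project onto the probability simplex."""
    f, *_ = np.linalg.lstsq(references, mixture, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()
```

DSA inverts this dependency: it estimates the cell-type-specific profiles themselves without requiring the frequencies as input.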

  13. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method to get fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown
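    The inverse-filtering step described above, spectral division followed by a noise filter whose cutoff is the quantity being optimised, can be sketched as follows (the Gaussian filter shape and cutoff value are illustrative assumptions):

```python
import numpy as np

def inverse_filter(signal, irf, cutoff=0.1):
    """Deconvolution by inverse filtering: spectral division by the
    instrumental response function, followed by a Gaussian low-pass
    noise filter with the given (cycles-per-sample) cutoff."""
    n = len(signal)
    H = np.fft.fft(irf, n)
    noise_filter = np.exp(-0.5 * (np.fft.fftfreq(n) / cutoff) ** 2)
    return np.real(np.fft.ifft(np.fft.fft(signal) / H * noise_filter))
```

Too small a cutoff distorts the kinetics; too large a cutoff amplifies high-frequency noise, which is exactly the trade-off the graphical optimisation addresses.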

  14. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To extract combined gearbox failures in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution is proposed. Following the frequency-domain blind deconvolution workflow, morphological filtering is first used to extract the modulation features embedded in the observed signals; the CFPA algorithm is then employed for complex-domain blind separation; finally, the J-divergence of the spectra is used as a distance measure to resolve the permutation ambiguity. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to combined gearbox failure detection in practice.

  15. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

    The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in the processing of experimental spectra of the Doppler broadening of the annihilation line in positron annihilation measurements is described. By comparing results from both algorithms, it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if the 1274 keV ²²Na peak, measured simultaneously with the annihilation peak, is used for deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes in the annihilation peak (e.g. the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
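    The two iterations compared above are short enough to state directly. A 1-D sketch (the blur is passed as a callable; step size and iteration counts are illustrative):

```python
import numpy as np

def van_cittert(y, blur, n_iter=200, mu=1.0):
    """Van Cittert: additive update x <- x + mu * (y - blur(x))."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + mu * (y - blur(x))
    return x

def gold(y, blur, n_iter=200):
    """Gold: multiplicative update x <- x * y / blur(x); for non-negative
    data this keeps the estimate non-negative, unlike Van Cittert."""
    x = y.copy()
    for _ in range(n_iter):
        x = x * y / np.maximum(blur(x), 1e-12)
    return x
```

The positivity constraint built into the Gold update is the usual reason it behaves more robustly on spectra with baseline distortions.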

  16. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...

  17. Statistically sound evaluation of trace element depth profiles by ion beam analysis

    International Nuclear Information System (INIS)

    Schmid, K.; Toussaint, U. von

    2012-01-01

    This paper presents the underlying physics and statistical models that are used in the newly developed program NRADC for fully automated deconvolution of trace level impurity depth profiles from ion beam data. The program applies Bayesian statistics to find the most probable depth profile given ion beam data measured at different energies and angles for a single sample. Limiting the analysis to % level amounts of material allows one to linearize the forward calculation of ion beam data which greatly improves the computation speed. This allows for the first time to apply the maximum likelihood approach to both the fitting of the experimental data and the determination of confidence intervals of the depth profiles for real world applications. The different steps during the automated deconvolution will be exemplified by applying the program to artificial and real experimental data.

  18. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to its reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm that works on the raw spectrum. With this method the spectrum baseline and the spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape; the model is then fitted under a Gaussian noise assumption. The proposed method is well suited to processing low-resolution spectra with a strong baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the derivation of the method and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure (baseline removal, then peak extraction). Finally, some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking in which peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where the baseline and peaks are processed sequentially. A dedicated experiment was conducted on real spectra: a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.
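    The "joint versus sequential" idea can be illustrated with a toy linear version of the additive model: a smooth (here polynomial) baseline and peaks at candidate positions are fitted in one least-squares problem. The polynomial basis, Gaussian peak shape, and candidate centers are illustrative assumptions; the paper's algorithm additionally enforces sparsity of the peak list.

```python
import numpy as np

def joint_baseline_peaks(mz, y, centers, width, poly_deg=3):
    """Joint fit of a smooth polynomial baseline and Gaussian-shaped peaks
    at candidate positions in a single least-squares problem, rather than
    baseline removal followed by peak extraction."""
    z = (mz - mz.mean()) / mz.std()
    B = np.vander(z, poly_deg + 1)                       # baseline basis
    P = np.column_stack([np.exp(-0.5 * ((mz - c) / width) ** 2)
                         for c in centers])              # peak basis
    coef, *_ = np.linalg.lstsq(np.hstack([B, P]), y, rcond=None)
    return coef[: poly_deg + 1], coef[poly_deg + 1 :]
```

Fitting both parts at once avoids the bias that a pre-computed baseline introduces under broad, unresolved peaks.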

  19. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been possible, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes has not yet been found: one which retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: first, what are the best data-space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data-space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but not both at once. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  20. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

    After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier transform method and iterative numerical methods. The positivity of the measured magnitude is the subject of a new method by Yvon Biraud. Analytic continuation of the Fourier transform applied to the unknown function has been studied by Jean-Paul Sheidecker. An important bibliography is given [fr]

  1. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
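    The Morozov discrepancy principle invoked above can be illustrated on a simpler quadratic (Tikhonov) deconvolution problem, where the regularized solution has a closed form in Fourier space; the paper's TV/ADMM machinery and Newton refinement are replaced here by a plain bisection on the monotone discrepancy curve, and all signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
t = np.arange(n)

x_true = np.exp(-0.5 * ((t - 64) / 6.0) ** 2)     # smooth object
psf = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.roll(psf, -n // 2))             # circular blur operator

sigma = 0.01
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + sigma * rng.standard_normal(n)
Y = np.fft.fft(y)

def x_of(lam):
    # Tikhonov solution: argmin ||Hx - y||^2 + lam ||x||^2, in Fourier space
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

def discrepancy(lam):
    r = np.real(np.fft.ifft(H * np.fft.fft(x_of(lam)))) - y
    return float(np.sum(r ** 2))

target = n * sigma ** 2          # Morozov: residual norm matches the noise level
lo, hi = 1e-10, 1e2              # discrepancy(lam) is monotone increasing in lam
for _ in range(100):             # geometric bisection, standing in for Newton
    mid = np.sqrt(lo * hi)
    if discrepancy(mid) < target:
        lo = mid
    else:
        hi = mid
lam_opt = np.sqrt(lo * hi)
x_rec = x_of(lam_opt)
```

The same stopping rule (residual equal to the expected noise energy) is what the ADMM/Newton scheme of the paper enforces for the TV-regularized problem.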

  2. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (∼500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (spanning many bunch-crossing periods) is sampled and digitised in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, an ADC, and further digital processing implemented on a PC. (author)
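    The idea of recovering impulse time and amplitude from samples of a slow shaper can be made concrete for a CR-RC shaper, whose sampled response admits an exact three-tap inverse filter. This is a textbook-style sketch with invented time constants, not the authors' exact filter.

```python
import numpy as np

tau = 50.0   # shaper time constant [ns] (illustrative)
dt = 25.0    # sampling period [ns], one sample per clock (illustrative)
a = np.exp(-dt / tau)

# Sampled CR-RC shaper response: h[k] = (k*dt/tau) * exp(-k*dt/tau)
k = np.arange(40)
h = (k * dt / tau) * a ** k

# Exact 3-tap deconvolution filter: inverse of the response z-transform
# H(z) = (dt/tau) * a * z^-1 / (1 - a*z^-1)^2, delayed one sample for causality
w = (tau / (dt * a)) * np.array([1.0, -2.0 * a, a * a])

# Pile-up: two overlapping pulses, amplitudes 1.0 and 0.6, three samples apart
sig = h.copy()
sig[3:] += 0.6 * h[:-3]

# Deconvolution restores isolated impulses at the (delayed) event times
dec = np.convolve(sig, w)[: len(sig)]
```

Because the inverse is exact for this shaper, the two piled-up pulses are separated into two clean impulses; with noisy samples the same filter gives the slight signal-to-noise penalty mentioned in the abstract.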

  3. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the experimental data fitted by the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectra, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of ²³⁹Pu, ²⁴¹Am and ²³⁵U. (author)
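    A minimal single-tail version of this lineshape (a Gaussian convolved with one left-handed exponential, i.e., a left-tailed exponentially modified Gaussian) can be fitted with the Levenberg-Marquardt algorithm via scipy.optimize.curve_fit. ALFITeX itself uses two exponentials and is written in Visual Basic, so this Python sketch with invented peak parameters is only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg_left(x, A, mu, sigma, tau):
    # Gaussian convolved with ONE left-handed exponential tail (total area A)
    z = ((x - mu) / sigma + sigma / tau) / np.sqrt(2.0)
    return (A / (2.0 * tau)) * np.exp((x - mu) / tau
                                      + sigma ** 2 / (2.0 * tau ** 2)) * erfc(z)

rng = np.random.default_rng(2)
E = np.linspace(5100.0, 5250.0, 300)   # energy axis [keV], illustrative values

# Synthetic alpha peak with Poisson counting noise
counts = rng.poisson(emg_left(E, 5000.0, 5200.0, 8.0, 15.0)).astype(float)

# curve_fit defaults to Levenberg-Marquardt for unconstrained problems
popt, pcov = curve_fit(emg_left, E, counts, p0=[4000.0, 5190.0, 10.0, 10.0])
A_fit, mu_fit, sigma_fit, tau_fit = popt
```

The fitted area A is what emission-probability calculations use; adding the second exponential of the real code introduces one more (tau, weight) pair in the same template.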

  4. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
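    A minimal stand-in for alternating KL-divergence minimization under Poisson noise is the classical alternating Richardson-Lucy scheme, which applies multiplicative updates to the object and the PSF in turn. This sketch uses only nonnegativity and PSF normalization as constraints, not the application-specific feasible sets or the inexact strategy of the paper; the scene and PSF are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# True object: a few point sources on a faint background
x_true = np.full(n, 0.5)
x_true[[20, 35, 44]] += [60.0, 40.0, 50.0]

# True PSF: narrow Gaussian, stored circularly with its center at index 0
g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 1.5) ** 2)
psf_true = np.roll(g / g.sum(), -n // 2)

def cconv(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def flip(a):
    # circular adjoint kernel: a[(-k) mod n]
    return np.roll(a[::-1], 1)

y = rng.poisson(cconv(x_true, psf_true)).astype(float)   # Poisson data

# Alternating multiplicative (Richardson-Lucy) updates for object and PSF
x = np.full(n, y.mean())
h = np.full(n, 1.0 / n)
for _ in range(300):
    x = x * cconv(y / np.maximum(cconv(x, h), 1e-12), flip(h))
    h = h * cconv(y / np.maximum(cconv(x, h), 1e-12), flip(x)) / x.sum()
    h = np.maximum(h, 0.0)
    h /= h.sum()     # keep the PSF a unit-sum, nonnegative kernel
```

Each multiplicative update decreases the KL data term while preserving the total flux of the object, which is why these updates are a natural building block for the constrained alternating minimization described above.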

  5. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of the potential earthquake response of a soil deposit, a soil structure or a soil-foundation-structure system requires knowledge of earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. The process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence its output. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  6. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely reproduced the "true" magnetization and successfully restored fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
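    The forward problem (magnetization convolved with the sensor response) and a simple regularized inverse can be sketched as follows. A Tikhonov filter stands in for the ABIC-based optimization of the paper, and the response width, excursion geometry and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400

# "True" magnetization: uniform polarity with a short reversed excursion
m = np.ones(n)
m[180:200] = -1.0

# Sensor response: smooth bell over +/-50 samples, unit area,
# stored circularly so that index 0 is the response center
k = np.arange(-50, 51)
s = 1.0 / (1.0 + (k / 5.0) ** 2)
s /= s.sum()
s_full = np.zeros(n)
s_full[k % n] = s

# Pass-through measurement: convolution with the response, plus noise
S = np.fft.fft(s_full)
meas = np.real(np.fft.ifft(S * np.fft.fft(m))) + 0.01 * rng.standard_normal(n)

# Tikhonov-regularized deconvolution (stand-in for the ABIC optimization)
lam = 1e-3
M = np.conj(S) * np.fft.fft(meas) / (np.abs(S) ** 2 + lam)
m_rec = np.real(np.fft.ifft(M))
```

The broad response smears the 20-sample excursion well below its true amplitude; the regularized inverse restores it, which is the effect the ABIC machinery achieves while additionally estimating the noise-dependent regularization and the position/length errors.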

  7. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large datasets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.
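    Tracking peak areas as a function of collision voltage, as described above, can be sketched on synthetic deconvolved spectra. The species masses, sigmoidal voltage dependence and integration windows below are invented for illustration and do not represent measured H-NOX data or MetaUniDec internals.

```python
import numpy as np

rng = np.random.default_rng(5)
mz = np.linspace(10500.0, 12000.0, 3000)   # deconvolved mass axis [Da], illustrative
voltages = np.arange(0.0, 100.0, 10.0)     # collision-voltage ramp [V]

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Holo (ligand-bound) species at 11600 Da converts to apo at 11000 Da with a
# sigmoidal voltage dependence (invented model)
spectra = []
for v in voltages:
    frac = 1.0 / (1.0 + np.exp((v - 50.0) / 8.0))    # holo fraction
    s = frac * gauss(mz, 11600.0, 15.0) + (1.0 - frac) * gauss(mz, 11000.0, 15.0)
    spectra.append(s + 0.005 * rng.standard_normal(mz.size))
spectra = np.array(spectra)

# Track integrated peak areas across the ramp, one window per species
holo = spectra[:, (mz > 11540) & (mz < 11660)].sum(axis=1)
apo = spectra[:, (mz > 10940) & (mz < 11060)].sum(axis=1)
frac_holo = holo / (holo + apo)

# Midpoint of the dissociation transition along the ramp
v50 = voltages[np.argmin(np.abs(frac_holo - 0.5))]
```

Plotting frac_holo against voltage gives the kind of energetic profile the abstract describes; storing the stack of spectra plus the voltage axis in one HDF5 file is what makes this per-dataset bookkeeping fast in MetaUniDec.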

  8. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  9. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

    Roč. 16, č. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  10. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed order of kinetics and for continuous-trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) for the case of the continuous-trap distribution or a constant α for the case of mixed-order kinetics.
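    For reference, the analogous first-order glow-curve deconvolution function parameterized by Im, Tm and E (Kitis et al., 1998) can be written down directly; the mixed-order and continuous-distribution forms proposed in this abstract add one extra free parameter (α or ΔE) to the same Im/Tm template. The peak parameters below are invented.

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant [eV/K]

def glow_first_order(T, Im, Tm, E):
    # First-order glow peak expressed through Im, Tm and E (Kitis et al. 1998)
    u = (E / (k_B * T)) * (T - Tm) / Tm
    return Im * np.exp(1.0 + u
                       - (T / Tm) ** 2 * np.exp(u) * (1.0 - 2.0 * k_B * T / E)
                       - 2.0 * k_B * Tm / E)

T = np.linspace(350.0, 500.0, 1000)   # temperature axis [K]
I = glow_first_order(T, Im=1.0, Tm=420.0, E=1.1)
```

By construction the curve peaks at (Tm, Im), so both free parameters can be read off the experimental glow curve before the fit, which is the practical advantage these parameterizations share.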

  11. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

    Roč. 64, č. 2 (2004), s. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords : confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  12. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)
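    The elimination of the convolution kernel K(x,y) can be sketched with a regularized (Wiener-type) 2D Fourier deconvolution on a synthetic portal-like image; the clinical algorithm's actual kernel and real-time implementation are not reproduced here, and all geometry and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 128
yy, xx = np.mgrid[0:n, 0:n]

# Synthetic object: a sharp-edged strip ("bone") and a small round marker
obj = np.zeros((n, n))
obj[:, 60:68] = 1.0
obj[(yy - 30) ** 2 + (xx - 30) ** 2 < 9] = 2.0

# Blurring kernel K(x,y): isotropic Gaussian, circularly centered, unit sum
d = ((np.arange(n) + n // 2) % n) - n // 2
gx = np.exp(-0.5 * (d / 2.0) ** 2)
K = np.outer(gx, gx)
K /= K.sum()

# Blurred, noisy "portal image"
KF = np.fft.fft2(K)
img = np.real(np.fft.ifft2(KF * np.fft.fft2(obj))) \
    + 0.002 * rng.standard_normal((n, n))

# Wiener-type deconvolution: regularized division by the kernel spectrum
eps = 1e-3
rec = np.real(np.fft.ifft2(np.conj(KF) * np.fft.fft2(img)
                           / (np.abs(KF) ** 2 + eps)))
```

The restored image has steeper edges (sharper bone boundaries) and higher marker contrast, the two improvements the clinical evaluation above reports.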

  13. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  14. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative criterion for model comparison that penalizes model complexity, helping to prevent over-fitting of the data and allowing identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/.
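    The BIC-based choice of the number of peaks can be sketched with a generic sum-of-Lorentzians fit: fit models with an increasing peak count and keep the one that minimizes BIC. This is not decon1d itself; the lineshapes, positions and noise level below are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
x = np.linspace(-10.0, 10.0, 400)   # frequency axis (arbitrary units)

def lorentz(x, a, x0, w):
    return a * w ** 2 / ((x - x0) ** 2 + w ** 2)

# Synthetic spectrum: two overlapping Lorentzian resonances plus noise
y = lorentz(x, 1.0, -1.2, 1.0) + lorentz(x, 0.6, 1.8, 1.5)
y += 0.01 * rng.standard_normal(x.size)

def fit_n_peaks(n):
    # Least-squares fit of a sum of n Lorentzians; returns (RSS, n_params)
    p0 = []
    for i in range(n):
        p0 += [float(y.max()), -3.0 + 6.0 * i / max(n - 1, 1), 1.0]
    def resid(p):
        return sum(lorentz(x, *p[3 * i:3 * i + 3]) for i in range(n)) - y
    sol = least_squares(resid, p0)
    return float(np.sum(sol.fun ** 2)), 3 * n

# BIC = N ln(RSS/N) + k ln N: rewards fit quality, penalizes extra peaks
N = x.size
rss, bic = {}, {}
for n in (1, 2, 3):
    rss[n], k = fit_n_peaks(n)
    bic[n] = N * np.log(rss[n] / N) + k * np.log(N)
best = min(bic, key=bic.get)   # most parsimonious model supported by the data
```

The one-peak model underfits (large RSS) while the three-peak model buys only a noise-level RSS reduction at the cost of three extra parameters, so BIC selects two peaks without any manual inspection of residuals.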

  15. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  16. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  17. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  18. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses the line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain an acceptable signal-to-noise ratio. Comparison of the algorithm against Tikhonov-regularized reconstructions demonstrates considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity constrained spectral deconvolution scheme is effective in minimizing the line-shape distortions. The dual resolution reconstruction scheme is capable of minimizing spectral leakage artifacts. Copyright © 2013 Wiley Periodicals, Inc.

  19. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather the basic issue of deconvolvability has been explored from a theoretical view point. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  20. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. 
Higher contrast results were

  1. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

    Roč. 21, č. 4 (2012), s. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  2. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
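    The water reference deconvolution step (Morris) can be sketched in the time domain: the metabolite FID is divided by the measured water FID and multiplied by an ideal target lineshape, so that distortions common to both signals cancel. All frequencies, decay rates and distortion terms below are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 1024
t = np.arange(n) * 1e-3   # acquisition time [s], 1 ms dwell (illustrative)

# Instrumental distortion shared by every resonance: slow frequency drift
# (quadratic phase) plus extra line broadening
distort = np.exp(2j * np.pi * 3.0 * t ** 2 - 4.0 * t)

# Water reference: a single resonance carrying the same distortion
ideal_water = np.exp(-8.0 * t)        # ideal target lineshape
water_fid = ideal_water * distort     # what is actually measured

# Metabolite FID: two resonances with the same lineshape distortion, plus noise
metab_fid = (np.exp(2j * np.pi * 120.0 * t)
             + 0.5 * np.exp(2j * np.pi * 200.0 * t)) * np.exp(-8.0 * t) * distort
metab_fid = metab_fid + 0.001 * (rng.standard_normal(n)
                                 + 1j * rng.standard_normal(n))

# Water reference deconvolution: divide by the measured water FID and
# multiply by the ideal lineshape; the shared distortion cancels exactly
corrected = metab_fid * ideal_water / water_fid
spec = np.fft.fftshift(np.fft.fft(corrected))
freq = np.fft.fftshift(np.fft.fftfreq(n, d=1e-3))
```

The division also amplifies late-time noise (where the water FID is small), which is one reason the correction of lineshape distortions is more error-prone than the correction of frequency shifts, as the abstract notes.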

  3. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 512 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)
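    The scheme (Fourier-domain division by the resolution function, with a filter that zeroes coefficients down at the noise background) can be sketched as follows; the simple median-based threshold is a stand-in for the authors' automatic background-subtraction filter, and the lineshape parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 512
ch = np.arange(n)

# Intrinsic lineshape: narrow plus broad Gaussian components
line = 200.0 * np.exp(-0.5 * ((ch - 256) / 6.0) ** 2) \
     + 80.0 * np.exp(-0.5 * ((ch - 256) / 18.0) ** 2)

# Detector resolution function: Gaussian, circularly centered, unit area
d = ((ch + n // 2) % n) - n // 2
res = np.exp(-0.5 * (d / 4.0) ** 2)
res /= res.sum()

# Measured lineshape: convolution with the resolution, plus counting noise
R = np.fft.fft(res)
y = np.real(np.fft.ifft(R * np.fft.fft(line))) + rng.standard_normal(n)

# Automatic Fourier-space filter: keep only coefficients standing well above
# the flat noise background, then divide by the resolution spectrum
Y = np.fft.fft(y)
noise_level = np.median(np.abs(Y))   # most bins are noise-dominated
keep = np.abs(Y) > 5.0 * noise_level
dec = np.real(np.fft.ifft(np.where(keep, Y / R, 0.0)))
```

The hard truncation of the retained band is what introduces the mild unphysical oscillations mentioned in the abstract: the resolution contribution is substantially reduced but never fully eliminated.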

  4. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    A generally known problem of the GC separation of the trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with a cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 °C. The resolution of these CLA isomers obtained under the given conditions was not high enough for direct quantitative analysis, but it was sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by the separation of a model CLA mixture prepared by mixing a commercial CLA mixture with a CLA isomer fraction obtained by semi-preparative HPLC separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two small isomers trans-7,cis-9 and trans-8,cis-10 (in ratios up to 1:100).
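    Deconvolution of such an unresolved triplet can be sketched as a constrained multi-Gaussian fit in which the two minor isomers flank the dominant peak at fixed, known spacings; the retention times, widths and spacings here are invented, not the published chromatographic values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
t = np.linspace(4.0, 6.0, 400)   # retention time [min], illustrative

def gauss(t, a, mu, s):
    return a * np.exp(-0.5 * ((t - mu) / s) ** 2)

def triplet(t, a1, a2, a3, mu2, s):
    # minor / dominant / minor peaks at fixed relative positions, shared width
    return (gauss(t, a1, mu2 - 0.12, s)
            + gauss(t, a2, mu2, s)
            + gauss(t, a3, mu2 + 0.12, s))

# Dominant "cis-9,trans-11" peak flanked by two ~1% isomers, unresolved
y = triplet(t, 1.0, 100.0, 1.2, 5.0, 0.08) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(triplet, t, y, p0=[5.0, 80.0, 5.0, 4.95, 0.1])
a1, a2, a3, mu2, s = popt
ratio = a1 / a2   # minor-to-dominant area ratio (shared width cancels)
```

Fixing the spacings and sharing the peak width is the fitting analogue of the resolution factors determined from the model mixture: it makes the ~1:100 minor components identifiable even though they are buried in the dominant peak's flanks.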

  5. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Full Text Available Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication began with a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rate. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in the recovery of low-intensity co-eluted ions. The combination of optimized AMDIS with RAMSY demonstrated the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  6. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and has shown, in various studies, the potential to suppress artifacts due to

  7. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    Full Text Available The auditory steady-state response (ASSR) is one of the main clinical approaches for health screening and frequency-specific hearing assessment. However, its generation mechanism is still controversial. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, while MSAD uses the same ISIs but evenly-spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology, and distinct frequency characteristics in the synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. This similarity is also demonstrated at the individual level, with only MSAD showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both the stimulation rate and the sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of ASSR but

  8. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.
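
The method above combines a series expansion with a Butterworth filter. The sketch below reproduces only the filtering idea: a zero-phase Butterworth low-pass suppresses high-frequency components of a fluence profile, and the values outside the field are then set to zero as described. The filter order and cutoff are illustrative assumptions, and the series-expansion step of the paper is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def robust_fluence(fluence, order=4, cutoff=0.15):
    """Suppress high-frequency components of a 1-D fluence profile with
    a zero-phase Butterworth low-pass, then zero the values outside the
    field (order and cutoff are illustrative)."""
    b, a = butter(order, cutoff)          # cutoff relative to Nyquist
    smoothed = filtfilt(b, a, fluence)    # forward-backward: zero phase
    smoothed[fluence == 0] = 0.0          # fluence outside the field -> 0
    return smoothed

# A step-like fluence profile: 40 samples of open field.
profile = np.concatenate([np.zeros(20), np.ones(40), np.zeros(20)])
robust = robust_fluence(profile)
```

The forward-backward `filtfilt` pass keeps the field edges from shifting, which matters because the paper's deviations of interest are located precisely on the field edges.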

  9. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  10. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic energy needed to penetrate highly attenuating materials can only be obtained with resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but such methods run into problems with austenitic steel. A time-domain deconvolution method is developed here which takes the characteristics of the wave into account: a first step of phase correction, and a second step of spectral equalization which restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method

  11. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well known ill-posedness, both on recovering the blurring operator and the true image, makes the problem really difficult to handle. We show that, by imposing appropriate constraints on the variables and with well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
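
The separable structure mentioned above is what variable projection exploits: with the blur parameters held fixed, the image subproblem is linear and can be eliminated, leaving a reduced problem over the blur parameters alone. A minimal 1-D numpy sketch follows, assuming a parametric Gaussian blur and Tikhonov regularisation; this is not the authors' constrained algorithm, and the outer minimisation over the blur parameter (e.g. by Gauss-Newton, as the paper suggests) is not shown.

```python
import numpy as np
from scipy.linalg import toeplitz, lstsq

def blur_matrix(sigma, n):
    """Symmetric Toeplitz matrix implementing a 1-D Gaussian blur."""
    k = np.exp(-np.arange(n) ** 2 / (2 * sigma ** 2))
    return toeplitz(k / (2 * k.sum() - k[0]))

def projected_residual(sigma, y, lam=1e-6):
    """Variable projection: with the blur parameter fixed, the image
    subproblem is linear, solved here with Tikhonov regularisation;
    return the data misfit at its solution."""
    n = y.size
    A = blur_matrix(sigma, n)
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])   # stacked system
    x, *_ = lstsq(A_aug, np.concatenate([y, np.zeros(n)]))
    return float(np.linalg.norm(A @ x - y))

# Two spikes blurred with sigma = 2: the correct blur parameter explains
# the data far better than a substantially wrong one.
rng = np.random.default_rng(1)
n = 64
x_true = np.zeros(n)
x_true[20], x_true[40] = 1.0, 0.5
y = blur_matrix(2.0, n) @ x_true + rng.normal(0.0, 1e-3, n)
```

The constraints and carefully chosen regularisation parameters discussed in the paper are what make the reduced problem well behaved; without them, blind deconvolution admits degenerate solutions.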

  12. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    Full Text Available In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution on the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.
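
Once the point-spread function is known, "conventional deconvolution" in the sense used above is linear restoration. A minimal sketch with a Wiener-type filter follows; the PSF, image and noise-to-signal ratio are synthetic stand-ins, not the measured phase PSF of the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_deconvolve(image, psf, nsr=1e-4):
    """Linear (conventional) deconvolution with a known PSF; nsr is an
    assumed noise-to-signal ratio acting as the regulariser."""
    H = fft2(np.fft.ifftshift(psf))       # PSF centred at the origin
    G = fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(ifft2(F))

# Synthetic test: two point objects blurred by a Gaussian PSF.
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
obj = np.zeros((n, n))
obj[20, 20], obj[40, 44] = 1.0, 0.5
blurred = np.real(ifft2(fft2(obj) * fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The restored point objects are substantially sharper than in the blurred image, which is the effect reported above for the cell boundaries and sub-cellular features.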

  13. Comparison of small n statistical tests of differential expression applied to microarrays

    Directory of Open Access Journals (Sweden)

    Lee Anna Y

    2009-02-01

    Full Text Available Abstract Background DNA microarrays provide data for genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small-n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high-variance cDNA data. Conclusion Pre-processing of data influences performance, and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests in small-n microarray studies for both Affymetrix and cDNA data. The choice of method for a particular study will depend on software and normalization preferences.
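
The regularised t-statistics discussed above stabilise the poor per-gene variance estimates of small-n designs by shrinking them toward a background value. A minimal sketch in the spirit of CyberT follows; the shrinkage weight, background variance and simulated data are illustrative assumptions, not the published defaults.

```python
import numpy as np

def regularized_t(x1, x2, sigma0_sq, n0=3):
    """Regularised two-sample t-statistic: shrink each gene's pooled
    variance toward a background value sigma0_sq with pseudo-count
    weight n0 (values are illustrative)."""
    n1, n2 = x1.shape[1], x2.shape[1]
    v1 = x1.var(axis=1, ddof=1)
    v2 = x2.var(axis=1, ddof=1)
    sp = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    # Empirical-Bayes-style shrinkage of the per-gene variance.
    v_reg = (n0 * sigma0_sq + (n1 + n2 - 2) * sp) / (n0 + n1 + n2 - 2)
    se = np.sqrt(v_reg * (1.0 / n1 + 1.0 / n2))
    return (x1.mean(axis=1) - x2.mean(axis=1)) / se

# Small-n design: 3 arrays per class, 50 truly differential genes.
rng = np.random.default_rng(0)
genes, n = 1000, 3
x1 = rng.normal(0.0, 1.0, (genes, n))
x2 = rng.normal(0.0, 1.0, (genes, n))
x2[:50] += 2.0
t = regularized_t(x1, x2, sigma0_sq=1.0)
```

The shrinkage prevents the occasional tiny variance estimate from producing a huge spurious t-value, which is the failure mode of the standard t-test at n = 3.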

  14. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  15. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  16. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected roughly from mass balance.

  17. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  18. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
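
At its core, the isotope pattern deconvolution described above is a least-squares separation of a measured isotope pattern into natural-abundance and tracer contributions. A minimal numpy sketch follows; the natural Fe abundances are standard values, while the tracer composition and mixing fraction are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Natural iron isotope abundances (54Fe, 56Fe, 57Fe, 58Fe) and an
# assumed composition for the 57Fe-enriched tracer (the enrichment
# level here is illustrative).
nat = np.array([0.05845, 0.91754, 0.02119, 0.00282])
tracer = np.array([0.002, 0.030, 0.965, 0.003])

def pattern_deconvolution(observed):
    """Least-squares isolation of the natural and tracer contributions
    from a measured isotope pattern."""
    A = np.column_stack([nat, tracer])
    x, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return x / x.sum()          # molar fractions: [natural, tracer]

# Synthetic sample containing 90 % natural Fe and 10 % tracer.
obs = 0.9 * nat + 0.1 * tracer
frac = pattern_deconvolution(obs)
tracer_tracee_ratio = frac[1] / frac[0]
```

Because all four isotopes enter the fit simultaneously, an instrumental mass-bias model can be folded into the same least-squares problem, which is what gives the internal correction and lower uncertainties reported above.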

  19. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)

  20. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which induces severe limitations of the resolution on the received echograms. This resolution may be improved with deconvolution methods, but one-dimensional deconvolution methods come up against problems in non-destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature, but they often come from other application fields (biomedical engineering, geophysics) and we show they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats deconvolution as an estimation problem and is performed in two steps: (i) a phase-correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is due only to the transducer and is assumed time-invariant during propagation. (ii) A band-equalization step which restores the spectral content of the ideal reflectivity. Both steps are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and experimental results are given to show that this is a good approach for resolution improvement in attenuating media [fr

  1. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least-squares iterative deconvolution approach for the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [(11)C] Raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time-activity curve in the striata region with an error below 3%, whereas it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  2. Further statistical analysis for genome-wide expression evolution in primate brain/liver/fibroblast tissue

    Directory of Open Access Journals (Sweden)

    Gu Jianying

    2004-05-01

    Full Text Available Abstract In spite of only a 1-2 per cent genomic DNA sequence difference, humans and chimpanzees differ considerably in behaviour and cognition. Affymetrix microarray technology provides a novel approach to addressing a long-term debate on whether the difference between humans and chimpanzees results from the alteration of gene expression. Here, we used several statistical methods (distance method, two-sample t-tests, regularised t-tests, ANOVA and bootstrapping) to detect the differential expression pattern between humans and great apes. Our analysis shows that the pattern we observed before is robust against various statistical methods; that is, the pronounced expression changes occurred on the human lineage after the split from chimpanzees, and the dramatic brain expression alterations in humans may be driven mainly by a set of genes with increased expression (up-regulated) rather than decreased expression (down-regulated).

  3. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  4. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

    Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

    In order to overcome the limitations of the original expression of the probability distribution appearing in the literature on Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical to that introduced in the second and third choices for the internal energy constraint in Tsallis' statistics, and to be equal to the physical inverse temperature. It is shown that the probability distribution described by the new expression is invariant under uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given, and the relationship between the new and the original expressions of the probability distribution is discussed.

  5. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a lot of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  6. Some statistical properties of gene expression clustering for array data

    DEFF Research Database (Denmark)

    Abreu, G C G; Pinheiro, A; Drummond, R D

    2010-01-01

    DNA array data without a corresponding statistical error measure. We propose an easy-to-implement and simple-to-use technique that uses bootstrap re-sampling to evaluate the statistical error of the nodes provided by SOM-based clustering. Comparisons between SOM and parametric clustering are presented...... for simulated as well as for two real data sets. We also implement a bootstrap-based pre-processing procedure for SOM, that improves the false discovery ratio of differentially expressed genes. Code in Matlab is freely available, as well as some supplementary material, at the following address: https...

  7. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    Anelastic attenuation during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause a loss of seismic energy with increasing depth. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect commonly observed in deeper-level seismic data. The attenuation effect greatly influences the seismic images of the deeper target level, creating pitfalls in several respects. Seismic amplitude in the deeper target level often cannot represent the real subsurface character, owing to low amplitude values or chaotic events near the basement; in terms of frequency, the decay can be seen as diminishing frequency content at the deeper target. Meanwhile, seismic amplitude is a simple tool for pointing out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before further advanced interpretation methods are applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the northeast area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs at deeper levels of seismic data. We evaluate this pitfall by applying Gabor deconvolution to address the attenuation problem. Gabor deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets, and estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic data show better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic data are used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the

  8. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described, and some examples are presented to help users run the program and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)

  9. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-square misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter includes inherently a normalization between the modeled and observed data, thus it can address the unbalanced amplitude of a damped wavefield. We, specifically, normalize the modeled data with the observed data in the frequency-domain to estimate the deconvolution filter and selectively choose a frequency-band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long wavelength structure without low frequency information in the recorded data.
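
The filter-estimation step described above can be sketched in a few lines: damp both traces exponentially, move to the frequency domain, and estimate a stabilised matching (deconvolution) filter on a selected low-frequency band only. The damping constant, band width and stabiliser below are illustrative assumptions, not values from the paper, and the adjoint-state gradient computation is not shown.

```python
import numpy as np

def decon_filter(modeled, observed, alpha=0.05, band=30, eps=1e-8):
    """Deconvolution (matching) filter between exponentially damped
    modeled and observed traces, estimated on a low-frequency band
    (alpha, band and eps are illustrative)."""
    damp = np.exp(-alpha * np.arange(modeled.size))  # artificial low freqs
    M = np.fft.rfft(modeled * damp)
    D = np.fft.rfft(observed * damp)
    F = np.zeros_like(M)
    # Stabilised spectral division, restricted to the chosen band.
    F[:band] = M[:band] * np.conj(D[:band]) / (np.abs(D[:band]) ** 2 + eps)
    return np.fft.irfft(F, n=modeled.size)

# Identical traces: the estimated filter is a band-limited spike at
# zero lag, as expected for a perfect data match.
t = np.arange(256)
u = np.pi * 0.02 * (t - 64)
trace = (1.0 - 2.0 * u ** 2) * np.exp(-u ** 2)     # Ricker-type wavelet
f = decon_filter(trace, trace)
```

The spectral division builds the normalisation between modeled and observed data into the filter itself, which is how the objective copes with the offset-dependent amplitude decay of the damped wavefield.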

  10. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  11. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology in nuclear logging. The principles of familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are presented. On the basis of the intensity gamma-logging quantitative interpretation technology using the deconvolution method, a new quantitative interpretation method of separating radioactive elements is presented for interpreting gamma spectrum logging. This point-by-point spectrum striping deconvolution technique yields a quantitative interpretation of the logging data. (authors)

  12. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...

  13. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constrained and total variation (TV) regularization to recover a clear image from the AO corrected images. The point spread functions (PSFs) are estimated by bandwidth limited less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  14. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in vitro-in vivo correlation tools used to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro, and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not treated as an algorithm in its own right, but as the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
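    The point-by-point arithmetic behind such numerical treatment is simple enough to sketch without a spreadsheet (an illustration of the idea, not Langenbucher's actual worksheet): discrete convolution is a lower-triangular linear system, so its inversion, deconvolution, reduces to forward substitution.

```python
import numpy as np

def convolve_discrete(input_rate, weighting):
    """Discrete convolution: response[i] = sum_k input[k] * weighting[i-k]."""
    n = len(input_rate)
    response = np.zeros(n)
    for i in range(n):
        for k in range(i + 1):
            response[i] += input_rate[k] * weighting[i - k]
    return response

def deconvolve_discrete(response, weighting):
    """Invert the convolution by forward substitution; the system is
    lower triangular and solvable when weighting[0] != 0."""
    n = len(response)
    input_rate = np.zeros(n)
    for i in range(n):
        acc = response[i]
        for k in range(i):
            acc -= input_rate[k] * weighting[i - k]
        input_rate[i] = acc / weighting[0]
    return input_rate

weighting = np.exp(-0.5 * np.arange(8))                        # monoexponential weighting
released = np.array([0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])  # in vitro input
observed = convolve_discrete(released, weighting)
recovered = deconvolve_discrete(observed, weighting)
```

    The round trip is exact on noiseless data; with real measurements, the division by weighting[0] makes the recursion noise-sensitive, which is why smoothing usually precedes deconvolution.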

  15. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring layer thickness, such as oxide on corroded steel, is described. A time domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that the quality of the results obtained using their method depends on the ability of a skilled operator to line up an arbitrary common feature of the received signals; using deconvolution, no operator manipulations are necessary, so less highly trained personnel may successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions, but in the case of magnetite, its high velocity of sound meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)

  16. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor, s, and for s as a function of temperature.

  17. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
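    The paper's solver is PDIPM; as a much simpler stand-in that illustrates the same l1-minimizing idea (and is not the authors' method), an ISTA iteration can recover sparse impact-like forces from a convolved, noisy response. The kernel, sizes, and regularization weight below are illustrative assumptions.

```python
import numpy as np

def ista_deconvolve(H, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||H x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toeplitz convolution matrix for a short exponential impulse response.
n = 64
h = np.exp(-np.arange(8) / 2.0)
H = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 7), i + 1):
        H[i, j] = h[i - j]

truth = np.zeros(n)
truth[10], truth[40] = 1.0, 0.7            # two sparse "impact forces"
y = H @ truth + 0.01 * np.random.default_rng(0).standard_normal(n)
x_hat = ista_deconvolve(H, y)
```

    The l1 penalty drives all but a few entries of the solution to exactly zero, which is the sparsity that l2 (Tikhonov) regularization cannot produce.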

  18. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  19. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  20. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by application of the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  1. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  2. Application of blind deconvolution with crest factor for recovery of original rolling element bearing defect signals

    International Nuclear Information System (INIS)

    Son, J. D.; Yang, B. S.; Tan, A. C. C.; Mathew, J.

    2004-01-01

    Many machine failures are not detected well in advance due to the masking of background noise and attenuation of the source signal through the transmission mediums. Advanced signal processing techniques using adaptive filters and higher order statistics have been attempted to extract the source signal from the measured data at the machine surface. In this paper, blind deconvolution using the Eigenvector Algorithm (EVA) technique is used to recover a damaged bearing signal using only the measured signal at the machine surface. A damaged bearing signal corrupted by noise with varying signal-to-noise (s/n) ratios was used to determine the effectiveness of the technique in detecting an incipient signal and the optimum choice of filter length. The results show that the technique is effective in detecting the source signal with an s/n ratio as low as 0.21, but requires a relatively large filter length.

  3. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
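    The pseudoinverse route described here can be sketched numerically. In the snippet below the three "indicator spectra" are synthetic Gaussian bands standing in for the paper's pH indicators (the wavelengths, band positions, and concentrations are illustrative assumptions, not the paper's data): small singular values are truncated before inverting, which is exactly what makes the pseudoinverse robust.

```python
import numpy as np

# Columns of A are reference spectra of three pure components (synthetic
# Gaussian bands standing in for pH-indicator absorbance spectra).
wavelengths = np.linspace(400, 700, 150)

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

A = np.column_stack([band(450, 20), band(550, 25), band(620, 15)])

c_true = np.array([0.2, 0.5, 0.3])        # "true" concentrations
mixture = A @ c_true + 0.001 * np.random.default_rng(1).standard_normal(len(wavelengths))

# Pseudoinverse via SVD: A+ = V S+ U^T, with tiny singular values zeroed.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
c_est = Vt.T @ (s_inv * (U.T @ mixture))
```

    With well-separated bands the system is well conditioned and the recovered concentrations match the true ones to within the noise; as the bands overlap, the smallest singular value shrinks and the truncation threshold starts to matter.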

  4. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

    In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal versus dose curve has been compared with the one measured in a 60Co beam. Different experiments have been performed in order to observe the thermal stress and the radiation damage effects on the detector sensitivity. A LET dependence of the TL response has been observed. In order to get a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are extensively reported, and the possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  5. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    Full Text Available We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample, from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  6. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques without using synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the amplified noise during Fourier regularization.
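    Of the three techniques compared, Wiener filtering is the most compact to write down. The 2D sketch below is illustrative only (a disk phantom, a known Gaussian PSF, and a scalar noise-to-signal ratio are all assumptions), not the paper's implementation:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: G = H* / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Simple geometric phantom (a disk) blurred by a Gaussian PSF.
n = 128
y, x = np.mgrid[:n, :n]
phantom = ((x - 64) ** 2 + (y - 64) ** 2 < 20 ** 2).astype(float)
psf = np.exp(-0.5 * ((x - 64) ** 2 + (y - 64) ** 2) / 3.0 ** 2)
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(phantom) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

    The scalar NSR term plays the role of the regularizer: raising it suppresses amplified noise at the cost of edge sharpness, which is the trade-off ForWaRD's wavelet stage is designed to handle better.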

  7. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.

  8. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach; an estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for

  9. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
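    For a shift-invariant blur, the MLEM deconvolution used here takes the familiar Richardson-Lucy multiplicative form. The 1D toy below is an illustrative sketch (not the authors' implementation; the 5-tap kernel and spot positions are assumptions) showing two point-like activity spots sharpened after blurring by a known motion kernel:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """MLEM (Richardson-Lucy) deconvolution: multiplicative updates that
    keep the estimate nonnegative and match the measured data."""
    eps = 1e-12
    estimate = np.full_like(blurred, blurred.mean())
    psf_flipped = psf[::-1]                          # adjoint of the blur
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

truth = np.zeros(64)
truth[20], truth[35] = 1.0, 0.5                      # two point-like activity spots
motion_psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # known, normalized blur kernel
blurred = np.convolve(truth, motion_psf, mode="same")
restored = richardson_lucy(blurred, motion_psf)
```

    The multiplicative update is what preserves nonnegativity without an explicit constraint; in practice the iteration is stopped early or regularized, since unchecked it amplifies noise.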

  10. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  11. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  12. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in ''de-blurring'' 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from 85Sr. The technique, when applied to a series of well annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  13. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

    Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  14. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Aiim

    2006-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  15. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to perform a smoothing of the data because of the sensitivity of the deconvolution algorithms with respect to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value for two filters (linear and non-linear). In order to test the effectiveness of these optimal smoothing values, some parameters of the calculated RRF were considered using this optimal smoothing. The comparison of these parameters with the theoretical ones revealed a better result in the case of the linear filter than in the non-linear case. The study was carried out simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  16. GeneTrailExpress: a web-based pipeline for the statistical evaluation of microarray experiments

    Directory of Open Access Journals (Sweden)

    Kohlbacher Oliver

    2008-12-01

    Full Text Available Abstract Background High-throughput methods that allow for measuring the expression of thousands of genes or proteins simultaneously have opened new avenues for studying biochemical processes. While the noisiness of the data necessitates an extensive pre-processing of the raw data, the high dimensionality requires effective statistical analysis methods that facilitate the identification of crucial biological features and relations. For these reasons, the evaluation and interpretation of expression data is a complex, labor-intensive multi-step process. While a variety of tools for normalizing, analysing, or visualizing expression profiles has been developed in recent years, most of these tools offer only functionality for accomplishing certain steps of the evaluation pipeline. Results Here, we present a web-based toolbox that provides rich functionality for all steps of the evaluation pipeline. Our tool GeneTrailExpress offers, besides standard normalization procedures, powerful statistical analysis methods for studying a large variety of biological categories and pathways. Furthermore, an integrated graph visualization tool, BiNA, enables the user to draw the relevant biological pathways applying cutting-edge graph-layout algorithms. Conclusion Our gene expression toolbox, with its interactive visualization of the pathways and the expression values projected onto the nodes, will simplify the analysis and interpretation of biochemical pathways considerably.

  17. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

    To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
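    The core observation, that within-sample relative expression orderings (REOs) of gene pairs are stable across technical replicates but disrupted by treatment, can be illustrated with a small sketch. This is a minimal illustration of the REO concept, not the CellComp algorithm itself; the helper name and toy data are invented for the example.

```python
import numpy as np

def stable_reo_fraction(expr):
    """Fraction of gene pairs whose within-sample ordering (REO) is
    identical across all replicates.  expr: genes x replicates array."""
    n_genes, _ = expr.shape
    stable = total = 0
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            signs = np.sign(expr[i] - expr[j])
            if 0 in signs:          # skip exact ties
                continue
            total += 1
            if np.all(signs == signs[0]):
                stable += 1
    return stable / total if total else 0.0

# Three technical replicates with small noise: orderings are preserved.
rng = np.random.default_rng(0)
base = np.array([1.0, 5.0, 9.0, 13.0])          # well-separated genes
reps = base[:, None] + rng.normal(0, 0.1, size=(4, 3))
print(stable_reo_fraction(reps))                # all pair orderings stable
```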

  18. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

    The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where several trapping levels are subject to retrapping, electrons trapped at one level can transfer to another level through retrapping via the conduction band while the TL is being read out. At present, however, the method of CGCD does not take these electron transitions between trapping levels into account; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  19. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the root of the thread to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the thread. The delay time is the same as the propagation delay time of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  20. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that samples the nuclear radiation signal and sets the USB interface to Slave FIFO mode for high-speed transmission; LabVIEW performs online data processing and display. The blind deconvolution method is used to remove pile-up from the acquired signal and to restore the nuclear pulse signal. Real-time measurements demonstrate the advantages of the method. (authors)

  1. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least-squares Levenberg-Marquardt method. The main advantage of GlowFit is its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly, Microsoft Windows-operated program. Its graphic interface enables easy, intuitive manipulation of glow-peaks at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
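    GlowFit's core operation, least-squares fitting of strongly overlapping glow peaks with the Levenberg-Marquardt method starting from initial parameter guesses (the role of the "pattern files"), can be sketched as follows. This is a minimal illustration using a Gaussian stand-in for the first-order glow-peak shape, not GlowFit's actual peak function; all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(T, I_m, T_m, w):
    """Gaussian stand-in for a single glow peak (height, position, width)."""
    return I_m * np.exp(-0.5 * ((T - T_m) / w) ** 2)

def two_peaks(T, I1, T1, w1, I2, T2, w2):
    return peak(T, I1, T1, w1) + peak(T, I2, T2, w2)

T = np.linspace(300.0, 600.0, 301)                 # temperature axis, K
true = (1.0, 420.0, 18.0, 0.6, 455.0, 15.0)        # two overlapping peaks
rng = np.random.default_rng(1)
y = two_peaks(T, *true) + rng.normal(0, 0.005, T.size)

# Levenberg-Marquardt fit from rough initial guesses
p0 = (0.8, 410.0, 20.0, 0.5, 465.0, 20.0)
popt, _ = curve_fit(two_peaks, T, y, p0=p0, method="lm")
print(np.round(popt, 1))                           # recovered peak parameters
```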

  2. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, which makes it possible to `cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
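    The wavelet-denoising regularisation discussed above can be illustrated in miniature with a one-level Haar transform and soft thresholding. This is a sketch of the general idea only, not the authors' 3D pipeline; the signal, noise level, and threshold value are arbitrary assumptions.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet decomposition, soft-threshold the detail
    coefficients, then invert the transform (signal length must be even)."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)        # approximation
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)        # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                      # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, 256)
denoised = haar_denoise(noisy, thresh=0.5)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

A smooth signal has tiny detail coefficients, so thresholding removes mostly noise; this is the sense in which wavelet shrinkage "cools down" the image while preserving signal.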

  3. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)

  4. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function, F(t) = (1 + e^(-αβ))/(1 + e^((t-α)β)), was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the Fermi function referred to. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
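    A sketch of the iterative convolving idea: convolve an assumed small-intestinal input function with the two-parameter Fermi function and minimise the sum of squares over α and β. The input function, sampling grid, and optimiser choice here are illustrative assumptions, not the authors' exact procedure; the "observed" curve is synthesised from known parameters so the fit can be checked.

```python
import numpy as np
from scipy.optimize import minimize

def fermi(t, alpha, beta):
    """Impulse response ranging from ~square to ~monoexponential."""
    return (1 + np.exp(-alpha * beta)) / (1 + np.exp((t - alpha) * beta))

t = np.arange(0, 12, 0.5)                  # hours, 30-min frames
inp = np.exp(-t / 1.5) / 1.5               # assumed gastric emptying rate

def model(params):
    a, b = params
    return np.convolve(inp, fermi(t, a, b))[:t.size] * 0.5   # dt = 0.5 h

true = (2.0, 3.0)
obs = model(true)                          # synthetic "observed" curve

sse = lambda p: np.sum((model(p) - obs) ** 2)
res = minimize(sse, x0=(1.0, 1.0), method="Nelder-Mead")
print(np.round(res.x, 2))                  # best-fit (alpha, beta)
```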

  5. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation light detection and ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
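    The Gold algorithm itself is a simple multiplicative iteration that keeps the deconvolved waveform non-negative. The sketch below applies it to a toy blurred waveform with two hidden echoes; the system response, echo positions, and iteration count are invented for illustration and are not NEON's processing parameters.

```python
import numpy as np

def gold_deconvolve(y, h, n_iter=2000):
    """Gold's iterative deconvolution: multiplicative updates
    x <- x * (A^T y) / (A^T A x) that preserve non-negativity."""
    n = y.size
    # Toeplitz convolution matrix A such that y = A @ x
    A = np.array([[h[i - j] if 0 <= i - j < h.size else 0.0
                   for j in range(n)] for i in range(n)])
    ATy = A.T @ y
    ATA = A.T @ A
    x = np.full(n, y.mean() if y.mean() > 0 else 1.0)
    for _ in range(n_iter):
        x *= ATy / np.maximum(ATA @ x, 1e-12)
    return x

# Two hidden echoes blurred by a broad Gaussian system response
h = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
h /= h.sum()
x_true = np.zeros(60); x_true[20] = 1.0; x_true[26] = 0.7
y = np.convolve(x_true, h)[:60]
x_hat = gold_deconvolve(y, h)
print(int(np.argmax(x_hat)))   # location of the strongest recovered echo
```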

  6. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave, and the PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing performed directly on the raw signals, comprising deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistence. With our proposed method, the resulting PA images yield more detailed structural information: micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.
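    The deconvolution step, dividing out a pre-measured PSF in the frequency domain, can be sketched with a standard Wiener filter. This does not reproduce the paper's exact procedure or its EMD step; the crude N-shape PSF, the signal, and the SNR value are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(signal, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution of a raw signal with a
    pre-measured point spread function; snr sets the regularisation."""
    n = signal.size
    H = np.fft.rfft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(signal) * G, n)

# Synthetic example: two nearby absorbers blurred by an N-shape-like PSF
psf = np.zeros(200); psf[0:5] = [0.0, 1.0, 0.0, -1.0, 0.0]  # crude N-shape
x = np.zeros(200); x[80] = 1.0; x[90] = 0.8                 # two absorbers
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(psf), 200)    # circular blur
x_hat = wiener_deconvolve(y, psf, snr=1e6)
print(int(np.argmax(x_hat)))   # strongest recovered absorber position (80)
```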

  7. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Brett [Institute of Biomedical Studies, Baylor University, Waco, TX 76798 (United States); Neumann, Elizabeth K. [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States); Stow, Sarah M.; May, Jody C.; McLean, John A. [Department of Chemistry, Vanderbilt University, Nashville, TN 37235 (United States); Vanderbilt Institute of Chemical Biology, Nashville, TN 37235 (United States); Vanderbilt Institute for Integrative Biosystems Research and Education, Nashville, TN 37235 (United States); Center for Innovative Technology, Nashville, TN 37235 (United States); Solouki, Touradj, E-mail: Touradj_Solouki@baylor.edu [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States)

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  8. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    International Nuclear Information System (INIS)

    Harper, Brett; Neumann, Elizabeth K.; Stow, Sarah M.; May, Jody C.; McLean, John A.; Solouki, Touradj

    2016-01-01

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  9. Network statistics of genetically-driven gene co-expression modules in mouse crosses

    Directory of Open Access Journals (Sweden)

    Marie-Pier eScott-Boyer

    2013-12-01

    Full Text Available In biology, networks are used in different contexts as ways to represent relationships between entities, such as interactions between genes, proteins or metabolites. Despite progress in the analysis of such networks and their potential to better understand the collective impact of genes on complex traits, one remaining challenge is to establish the biological validity of gene co-expression networks and to determine what governs their organization. We used WGCNA to construct and analyze seven gene expression datasets from several tissues of mouse recombinant inbred strains (RIS). For six of the seven networks, we found that linkage to module QTLs (mQTLs) could be established for 29.3% of the gene co-expression modules detected in the several mouse RIS. For about 74.6% of such genetically-linked modules, the mQTL was on the same chromosome as the one contributing most genes to the module, with genes originating from that chromosome showing higher connectivity than other genes in the modules. Such modules (which we considered genetically-driven) had network statistic properties (density, centralization and heterogeneity) that set them apart from other modules in the network. Altogether, a sizeable portion of the gene co-expression modules detected in mouse RIS panels had genetic determinants as their main organizing principle. In addition to providing a biological validation for these modules, these genetic determinants imparted on them particular properties that set them apart from other modules in the network, to the point that they can be predicted to a large extent on the basis of their network statistics.
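    The network statistics mentioned (density, centralization and heterogeneity) have standard definitions in terms of node connectivity. A minimal sketch, assuming Horvath-style weighted-network formulas and an invented toy adjacency matrix with one hub gene:

```python
import numpy as np

def module_statistics(A):
    """Density, centralization and heterogeneity of a weighted module,
    following standard network-concept definitions (assumed here)."""
    n = A.shape[0]
    A = A - np.diag(np.diag(A))          # drop self-connections
    k = A.sum(axis=1)                    # node connectivity
    density = k.sum() / (n * (n - 1))
    centralization = n / (n - 2) * (k.max() / (n - 1) - density)
    heterogeneity = k.std() / k.mean()   # coefficient of variation of k
    return density, centralization, heterogeneity

# Hub-dominated module: one gene strongly connected to all others
n = 10
A = np.full((n, n), 0.1)
A[0, :] = A[:, 0] = 0.9
d, c, h = module_statistics(A)
print(round(d, 3), round(c, 3), round(h, 3))
```

A hub-dominated module like this one scores high on centralization and heterogeneity, the kind of signature the study uses to set genetically-driven modules apart.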

  10. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

    Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (¹⁹⁹IHg and ²⁰¹MeHg) are added before sample preparation to quantify the extent of the methylation and demethylation processes. Focused microwave digestion was evaluated to perform the quantitative extraction of such compounds from solid matrices of environmental interest. Satisfactory results were obtained for different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464) using both GC-ICPMS and GC-MS, demonstrating the suitability of the proposed analytical method. (author)
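    Isotope pattern deconvolution reduces to solving a small linear system: the measured isotope pattern is a linear combination of the patterns of the natural-abundance and isotopically enriched species, and the molar fractions follow from least squares. A minimal sketch with invented (non-certified) abundance values:

```python
import numpy as np

# Columns: illustrative isotope patterns (abundances sum to 1) for
# natural Hg and two enriched spikes, over five isotope channels.
patterns = np.array([
    [0.10, 0.01, 0.01],
    [0.17, 0.91, 0.02],
    [0.23, 0.04, 0.04],
    [0.13, 0.02, 0.90],
    [0.37, 0.02, 0.03],
])

true_fractions = np.array([0.5, 0.3, 0.2])   # natural, spike 1, spike 2
measured = patterns @ true_fractions         # observed mixed pattern

# Isotope pattern deconvolution: non-iterative least-squares unmixing
fractions, *_ = np.linalg.lstsq(patterns, measured, rcond=None)
print(np.round(fractions, 3))
```

With real data the measured pattern carries noise, so the overdetermined least-squares fit (five channels, three unknowns) also yields residuals that flag model error.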

  11. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  12. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

    Full Text Available The deconvolution effect of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems is investigated by using the finite element method. Two different earthquake input mechanisms are used to consider the deconvolution effects in the analyses: the standard rigid-base input and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example, and the 1999 Kocaeli earthquake ground motion is selected as the near-fault ground motion. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  13. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
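    The basic Fourier deconvolution steps, transform the spectrum, divide by the transform of an assumed broadening lineshape, and apodise to keep noise amplification in check, can be sketched as follows. The Gaussian lineshape, cutoff, and line positions are illustrative assumptions, not the chapter's worked example.

```python
import numpy as np

def fourier_deconvolve(spectrum, lineshape, cutoff):
    """Resolution enhancement: divide the spectrum's transform by that
    of a broadening lineshape, zero coefficients above a cutoff
    (a hard apodisation window), and transform back."""
    n = spectrum.size
    S = np.fft.rfft(spectrum)
    B = np.fft.rfft(np.fft.ifftshift(lineshape), n)
    out = S / B
    out[cutoff:] = 0.0
    return np.fft.irfft(out, n)

# Two lines 40 points apart, blurred into a single unresolved hump
# by a Gaussian lineshape of width sigma = 20 points.
n, sigma = 400, 20.0
idx = np.arange(n)
shape = np.exp(-0.5 * ((idx - n // 2) / sigma) ** 2)
shape /= shape.sum()
doublet = np.zeros(n); doublet[180] = doublet[220] = 1.0
blurred = np.fft.irfft(np.fft.rfft(doublet)
                       * np.fft.rfft(np.fft.ifftshift(shape), n), n)
enhanced = fourier_deconvolve(blurred, shape, cutoff=18)
```

After deconvolution the two overlapped transitions reappear as distinct band-limited peaks at their original positions; the cutoff trades resolution against the noise amplification that full division would cause.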

  14. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    Science.gov (United States)

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.

  15. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

    In this paper, we present a method for measurement and analysis of nuclear pulse signal, with which pile-up signal is removed, the signal baseline is restored, and the original signal is obtained. The data acquisition system includes FPGA, ADC and USB. The FPGA controls the high-speed ADC to sample the signal of nuclear radiation, and the USB makes the ADC work on the Slave FIFO mode to implement high-speed transmission status. Using the LabVIEW, it accomplishes online data processing of the blind deconvolution algorithm and data display. The simulation and experimental results demonstrate advantages of the method. (authors)

  16. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.

  17. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images
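    The scatter-removal idea, modelling the detected image as the primary image convolved with a PSF made of a sharp primary core plus a broad Gaussian scatter term, then applying the inverse filter in the frequency domain, can be sketched as follows. The scatter fraction and Gaussian width are invented values, not the paper's measured parameters.

```python
import numpy as np

def remove_scatter(image, scatter_fraction, sigma):
    """Inverse-filter scatter removal assuming a spatially invariant PSF
    H = (1 - SF)*delta + SF*gaussian; since SF < 1, H never vanishes
    and the inverse filter is stable."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    G = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    H = (1 - scatter_fraction) + scatter_fraction * G
    return np.real(np.fft.ifft2(np.fft.fft2(image) / H))

# Simulate: a primary image degraded by 40% scatter of width sigma = 8 px
primary = np.zeros((64, 64)); primary[20:44, 20:44] = 1.0
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
G = np.exp(-2 * (np.pi * 8.0) ** 2 * (fx ** 2 + fy ** 2))
detected = np.real(np.fft.ifft2(np.fft.fft2(primary) * (0.6 + 0.4 * G)))
restored = remove_scatter(detected, 0.4, 8.0)
print(float(np.abs(restored - primary).max()) < 1e-8)  # exact for this model
```

The scatter term lowers contrast by adding a low-frequency haze; dividing by H restores it, which is the "nonarbitrary removal of scatter" described above.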

  18. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.
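    The non-negative factorisation at the heart of the method can be illustrated with the plain Lee-Seung multiplicative updates; the published NMF2D algorithm additionally convolves the factors over time and log-frequency shifts, which this sketch omits, and the toy "spectrogram" is invented.

```python
import numpy as np

def nmf(V, rank, n_iter=2000, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - WH||_F; the
    updates keep W and H non-negative throughout."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy "spectrogram": two spectral templates active in disjoint frames
W0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H0 = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
V = W0 @ H0
W, H = nmf(V, rank=2)
print(np.linalg.norm(V - W @ H))   # residual shrinks toward zero
```

In the transcription setting, W plays the role of the instrument's time-frequency model and H the pattern of note activations.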

  19. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using the Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

    Full Text Available Interpretation of high resolution aeromagnetic data of Ilesha and its environs within the basement complex geological setting of Southwestern Nigeria was carried out in this study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. The investigation was carried out by applying Euler deconvolution to filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth weighting analyses across the study area using the Euler deconvolution filter control file in the Oasis Montaj software. Total magnetic intensity in the field ranged from –77.7 to 139.7 nT. The total magnetic field intensities reveal both high-magnitude (high-amplitude) and low-magnitude (low-amplitude) magnetic anomalies in the area under consideration. The study area is characterised by high intensities correlated with lithological variation in the basement; this sharp contrast reflects the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak-intensity, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with depths ranging from –500 to 1000 m. The Euler deconvolution results show that the basement relief is generally gentle and flat, lying within the basement terrain.
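    Euler deconvolution rests on Euler's homogeneity equation: for a field T homogeneous about a source at (x0, z0) with structural index N and background level B, the observations satisfy x·∂T/∂x + N·T = x0·∂T/∂x + z0·∂T/∂z + N·B, which is linear in the unknowns (x0, z0, B). A minimal 2-D profile sketch with an invented point source (structural index N = 1) and analytic derivatives:

```python
import numpy as np

# Synthetic anomaly: 1/r field of a point source buried at x0 = 50,
# depth z0 = 10, observed along a profile at the surface z = 0.
x = np.linspace(0.0, 100.0, 201)
x0, z0, N = 50.0, 10.0, 1.0
r2 = (x - x0) ** 2 + z0 ** 2
T = 1.0 / np.sqrt(r2)              # homogeneous of degree -1
Tx = -(x - x0) / r2 ** 1.5         # dT/dx at z = 0
Tz = z0 / r2 ** 1.5                # dT/dz at z = 0 (z positive down)

# Euler's equation rearranged as a linear system in (x0, z0, B)
A = np.column_stack([Tx, Tz, N * np.ones_like(x)])
b = x * Tx + N * T
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(sol, 2))            # estimated (x0, depth, background)
```

In practice the same least-squares solve is repeated over a sliding window of the gridded field, and the cloud of depth estimates maps the undulating basement.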

  20. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    deblurring in the presence of impulsive noise," Int. J. Comput. Vision, vol. 70, no. 3, pp. 279–298, Dec. 2006. [13] A. E. Beaton and J. W. Tukey, "The... AN ℓ1-TV ALGORITHM FOR DECONVOLUTION WITH SALT AND PEPPER NOISE, Brendt Wohlberg, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory... and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider

  1. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  2. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  3. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

    We solve numerically a deconvolution problem to extract the undisturbed spectrum from the measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of the neutral pions by detecting its decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in an excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation. (C) 2010 Elsevier B.V. All rights reserved.
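The discretize-then-regularize step can be sketched with a synthetic response matrix (all parameters invented, not the ALICE PHOS geometry; the regularization strength is simply fixed here, whereas the paper picks it by the L-curve method):

```python
import numpy as np

# Minimal Tikhonov sketch: the measured spectrum y = K x + noise is
# inverted by solving the regularised normal equations
#   (K^T K + lam^2 I) x = K^T y,
# which damps the noise-amplifying small singular values of K.
n = 100
e = np.linspace(0.0, 1.0, n)                  # energy-like axis
K = np.exp(-0.5 * ((e[:, None] - e[None, :]) / 0.03) ** 2)
K /= K.sum(axis=1, keepdims=True)             # row-normalised smearing kernel

x_true = np.exp(-0.5 * ((e - 0.4) / 0.05) ** 2)   # assumed "true" spectrum
y = K @ x_true + np.random.default_rng(0).normal(0.0, 1e-3, n)

lam = 1e-2                                    # fixed regularisation strength
x_hat = np.linalg.solve(K.T @ K + lam**2 * np.eye(n), K.T @ y)
# x_hat tracks x_true; the unregularised solve would amplify the noise
```

The L-curve method then amounts to scanning `lam` and picking the corner of the residual-norm versus solution-norm trade-off curve.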

  4. Analysis of low-pass filters for approximate deconvolution closure modelling in one-dimensional decaying Burgers turbulence

    Science.gov (United States)

    San, O.

    2016-01-01

    The idea of spatial filtering is central in approximate deconvolution large-eddy simulation (AD-LES) of turbulent flows. The need for low-pass filters naturally arises in the approximate deconvolution approach which is based solely on mathematical approximations by employing repeated filtering operators. Two families of low-pass spatial filters are studied in this paper: the Butterworth filters and the Padé filters. With a selection of various filtering parameters, variants of the AD-LES are systematically applied to the decaying Burgers turbulence problem, which is a standard prototype for more complex turbulent flows. Comparing with the direct numerical simulations, it is shown that all forms of the AD-LES approaches predict significantly better results than the under-resolved simulations at the same grid resolution. However, the results highly depend on the selection of the filtering procedure and the filter design. It is concluded that a complete attenuation for the smallest scales is crucial to prevent energy accumulation at the grid cut-off.
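The repeated-filtering idea at the heart of approximate deconvolution can be sketched in 1-D (a hedged illustration using an assumed discrete box filter, not the Butterworth or Padé filters studied in the paper):

```python
import numpy as np

# Approximate deconvolution by repeated filtering (van Cittert): the
# inverse of a low-pass filter G is approximated by the truncated series
# Q_N = I + (I-G) + (I-G)^2 + ... + (I-G)^N, applied to the filtered field.
def box_filter(u):                            # assumed periodic low-pass filter
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approx_deconv(u_bar, N=5):
    v = u_bar.copy()                          # running power (I-G)^k u_bar
    u_star = u_bar.copy()                     # accumulated series
    for _ in range(N):
        v = v - box_filter(v)                 # multiply by (I - G)
        u_star = u_star + v
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8.0 * x)         # resolved + marginal scale
u_bar = box_filter(u)                         # the "filtered" field
u_star = approx_deconv(u_bar)                 # deconvolved approximation
```

The deconvolved field recovers most of the attenuated high-wavenumber mode, which is exactly the sub-filter content the AD-LES closure needs.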

  5. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.

  6. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  7. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  8. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

    A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of a linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of errors of ranging due to peak overlap or peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated
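The linear, time-invariant model can be sketched with synthetic data (peak shape, positions and intensities below are all invented, and non-negative least squares stands in for the paper's maximum-entropy regularization):

```python
import numpy as np
from scipy.optimize import nnls

# Hedged sketch: the measured spectrum is ideal delta peaks convolved
# with one common system (peak-shape) function, so the intensities at
# assumed peak positions solve a non-negative linear least-squares problem.
m = np.arange(0.0, 60.0, 0.05)                # mass-to-charge axis

def peak_shape(t, tau=0.3):                   # assumed one-sided exponential tail
    return np.exp(-np.clip(t, 0.0, None) / tau) * (t >= 0)

positions = [24.0, 25.0, 26.0]                # assumed isotope peak positions
true_I = np.array([0.6, 0.3, 0.1])
A = np.column_stack([peak_shape(m - p) for p in positions])
y = A @ true_I + 0.001 * np.random.default_rng(1).standard_normal(m.size)

I_hat, _ = nnls(A, y)                         # deconvolved peak intensities
```

Because the columns of `A` share the same tailed shape, overlapping peaks are separated as long as the design matrix stays well conditioned, which mirrors the paper's point about overlap-limited ranging.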

  9. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  10. Analysis of MultiWord Expression Translation Errors in Statistical Machine Translation

    DEFF Research Database (Denmark)

    Klyueva, Natalia; Liyanapathirana, Jeevanthi

    2015-01-01

    In this paper, we analyse the usage of multiword expressions (MWE) in Statistical Machine Translation (SMT). We exploit the Moses SMT toolkit to train models for French-English and Czech-Russian language pairs. For each language pair, two models were built: a baseline model without additional MWE data and a model enhanced with information on MWE. For the French-English pair, we tried three methods of introducing the MWE data. For the Czech-Russian pair, we used just one method: adding automatically extracted data as a parallel corpus.

  11. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution

    Directory of Open Access Journals (Sweden)

    Kleinjans Jos

    2008-09-01

    Full Text Available Abstract Background: In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results: In this paper we introduce a Bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, which is called Comparative Analysis of Shapley value (shortly, CASh), is applied to data concerning the gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for testing differential gene expression. Both lists of genes provided by CASh and t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, it turns out that the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than the t-test for the detection of differential gene expression variability. Conclusion: CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a

  12. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution.

    Science.gov (United States)

    Moretti, Stefano; van Leeuwen, Danitsja; Gmuender, Hans; Bonassi, Stefano; van Delft, Joost; Kleinjans, Jos; Patrone, Fioravante; Merlo, Domenico Franco

    2008-09-02

    In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. In this paper we introduce a Bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, which is called Comparative Analysis of Shapley value (shortly, CASh), is applied to data concerning the gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for testing differential gene expression. Both lists of genes provided by CASh and t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, it turns out that the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than t-test for the detection of differential gene expression variability. CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. 
We demonstrate a synergistic effect between coalitional games and statistics that
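The Shapley value underlying CASh can be illustrated on a toy coalitional game (the characteristic function `v` below is hypothetical; in the paper it is a "microarray game" built from the expression data):

```python
import math
from itertools import permutations

# Exact Shapley value by enumerating player orderings: each player's
# value is its marginal contribution to v, averaged over all orderings.
def shapley(players, v):
    phi = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition = coalition | {p}
            phi[p] += v(coalition) - before
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in phi}

# Hypothetical game: "genes" g1 and g2 are only jointly relevant, g3 never is.
v = lambda S: 1.0 if {"g1", "g2"} <= S else 0.0
values = shapley(["g1", "g2", "g3"], v)   # g1 and g2 split the credit, g3 gets 0
```

CASh then asks, via a bootstrap, whether each gene's Shapley value differs significantly between the two exposure conditions.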

  13. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source

  14. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  15. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    International Nuclear Information System (INIS)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Gibb, Andrew G.; Halpern, Mark; Marsden, Gaelen; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; France, Kevin; Gundersen, Joshua O.; Hughes, David H.; Martin, Peter G.; Olmi, Luca

    2011-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.'5 inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament.
We report physical properties of ten compact sources, including six associated protostars, by fitting
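The flux-conserving L-R iteration itself is compact. A hedged 1-D sketch (hypothetical beam and point sources, not BLAST maps): each step multiplies the current estimate by the beam-correlated ratio of the observed to the predicted map.

```python
import numpy as np

# Richardson-Lucy deconvolution in 1-D with a normalised beam: the
# multiplicative update u <- u * corr(d / (u ⊛ p), p) conserves total
# flux and sharpens blended point sources.
def conv(a, p):
    return np.convolve(a, p, mode="same")

def richardson_lucy(d, p, n_iter=50):
    u = np.full_like(d, d.mean())             # flat, flux-matched start
    for _ in range(n_iter):
        ratio = d / np.maximum(conv(u, p), 1e-12)
        u = u * conv(ratio, p[::-1])          # correlate ratio with the beam
    return u

psf = np.exp(-0.5 * (np.arange(-10, 11) / 2.5) ** 2)
psf /= psf.sum()                              # normalised beam
truth = np.zeros(128)
truth[40], truth[44] = 1.0, 0.7               # two blended point sources
obs = conv(truth, psf)
rest = richardson_lucy(obs, psf)              # sharper, flux-conserving map
```

With a unit-normalised beam the total flux of the estimate equals that of the data at every iteration, which is the flux-conservation property the abstract emphasises.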

  16. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra by using a peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq.kg⁻¹ from Ankatso, soil sample spectra with average activities of about (125±2) Bq.kg⁻¹ from Antsirabe and soil sample spectra with high activities of about (21100±120) Bq.kg⁻¹ from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows the deconvolution of many multiplet regions: a quartet within 235–242 keV; Pb-214 and Pb-212 within 294–301 keV; Th-232 daughters within 582–584 keV; Ac-228 within 904–911 keV and within 964–970 keV; and Bi-214 within 1401–1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV. [fr

  17. Statistical Use of Argonaute Expression and RISC Assembly in microRNA Target Identification

    Science.gov (United States)

    Stanhope, Stephen A.; Sengupta, Srikumar; den Boon, Johan; Ahlquist, Paul; Newton, Michael A.

    2009-01-01

    MicroRNAs (miRNAs) posttranscriptionally regulate targeted messenger RNAs (mRNAs) by inducing cleavage or otherwise repressing their translation. We address the problem of detecting m/miRNA targeting relationships in Homo sapiens from microarray data by developing statistical models that are motivated by the biological mechanisms used by miRNAs. The focus of our modeling is the construction, activity, and mediation of RNA-induced silencing complexes (RISCs) competent for targeted mRNA cleavage. We demonstrate that regression models accommodating RISC abundance and controlling for other mediating factors fit the expression profiles of known target pairs substantially better than models based on m/miRNA expressions alone, and lead to verifications of computational target pair predictions that are more sensitive than those based on marginal expression levels. Because our models are fully independent of exogenous results from sequence-based computational methods, they are appropriate for use as either a primary or secondary source of information regarding m/miRNA target pair relationships, especially in conjunction with high-throughput expression studies. PMID:19779550
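The modelling idea can be sketched with simulated data (everything below is invented for illustration; the paper's actual models control for additional mediating factors): regressing target expression on a RISC-abundance term rather than on miRNA level alone.

```python
import numpy as np

# Hedged sketch: target mRNA is generated as a function of a RISC proxy
# (miRNA x Argonaute expression); a miRNA-only regression then fits worse
# than the RISC-aware regression, mirroring the paper's comparison.
rng = np.random.default_rng(11)
n = 200
mirna = rng.uniform(1.0, 2.0, n)              # miRNA expression
ago = rng.uniform(0.5, 1.5, n)                # Argonaute expression
risc = mirna * ago                            # assumed RISC-abundance proxy
target = 5.0 - 1.2 * risc + rng.normal(0.0, 0.1, n)   # repressed target mRNA

def rss(X):                                   # residual sum of squares of OLS fit
    beta = np.linalg.lstsq(X, target, rcond=None)[0]
    return float(np.sum((target - X @ beta) ** 2))

X_marginal = np.column_stack([np.ones(n), mirna])   # miRNA-only model
X_risc = np.column_stack([np.ones(n), risc])        # RISC-aware model
# the RISC-aware model leaves far less residual variance
```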

  18. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  19. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  20. Statistical analysis of grapevine mortality associated with esca or Eutypa dieback foliar expression

    Directory of Open Access Journals (Sweden)

    Lucia GUERIN-DUBRANA

    2013-09-01

    Full Text Available Esca and Eutypa dieback are two major wood diseases of grapevine in France. Their widespread distribution in vineyards leads to vine decline and to a loss in productivity. However, little is known about the temporal dynamics of these diseases at the plant level, or about the relationship between foliar expression of the diseases and vine death. To investigate the latter question, the vines of six vineyards cv. Cabernet Sauvignon in the Bordeaux region were surveyed, by recording foliar symptoms, dead arms and dead plants from 2004 to 2010. In 2008, 2009 and 2010, approximately five percent of the asymptomatic vines died, but the percentage of dead vines which had previously expressed esca foliar symptoms was higher, and varied between vineyards. A logistic regression model was used to determine the previous years of symptomatic expression associated with vine mortality. Mortality from esca was always associated with foliar symptom expression in the year preceding vine death; one or two other earlier years of expression frequently represented additional risk factors. The Eutypa dieback symptom was also a risk factor for death, greater than or equal to that of esca. The study of the internal necroses of vines expressing esca or Eutypa dieback is discussed in the light of these statistical results.

  1. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
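The flavour of a robust rank-based comparison can be sketched with a permutation test (the mean-rank difference below is a simple stand-in, not the paper's distance-based statistic, and the expression data are simulated):

```python
import numpy as np

# Permutation test with a rank statistic: ranks are insensitive to the
# heavy tails of the simulated (log-normal) expression values, and the
# null distribution is built by shuffling condition labels.
def rank_stat(a, b):
    z = np.concatenate([a, b])
    ranks = np.argsort(np.argsort(z))         # 0-based ranks (no ties expected)
    return abs(ranks[:a.size].mean() - ranks[a.size:].mean())

rng = np.random.default_rng(42)
a = rng.lognormal(0.0, 1.0, 20)               # condition 1 (heavy-tailed)
b = rng.lognormal(1.0, 1.0, 20)               # condition 2, shifted up

obs = rank_stat(a, b)
pooled = np.concatenate([a, b])
perm = []
for _ in range(999):                          # permutation null distribution
    rng.shuffle(pooled)
    perm.append(rank_stat(pooled[:20], pooled[20:]))
p_value = (1 + sum(s >= obs for s in perm)) / (1 + len(perm))
```

The `(1 + …)/(1 + …)` form keeps the permutation p-value strictly positive, a standard convention.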

  2. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    Thermoluminescence dosimeters (TLD), especially of LiF:Mg,Ti material, are among the most practical personal dosimeters known to date. Dose measurement under 100 uGy using a TLD reader is very difficult to perform with high precision, and software analysis can be used to improve the precision of the reader. The objective of the research is to compare three TL glow curve analysis methods for doses in the range of 5 to 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between pre-selected temperature limits, and the background signal is estimated by a second readout following the first. The second method is deconvolution: the glow curve is separated mathematically into four peaks, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six-fold over manual analysis at a dose of 20 uGy, and reduces the minimum measurable dose (MMD) to 10 uGy, compared with 60 uGy for manual analysis and 20 uGy for the peak-5 area method. In terms of linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose response curve over the entire dose range.
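The deconvolution step can be sketched numerically (a hedged illustration: the curve is synthetic, Gaussian components stand in for the first-order-kinetics peak shapes of LiF:Mg,Ti, and every parameter value is invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Glow-curve deconvolution sketch: fit a sum of peak components to the
# measured curve, then read the dose from the summed areas of the three
# dosimetric peaks rather than a single region of interest.
def peak(T, area, centre, width):
    return area * np.exp(-0.5 * ((T - centre) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

def glow(T, *p):                              # three peaks, parameters flattened
    return sum(peak(T, *p[i:i + 3]) for i in range(0, 9, 3))

T = np.linspace(100.0, 300.0, 400)            # readout temperature axis
true_p = [30.0, 160.0, 12.0, 60.0, 190.0, 14.0, 200.0, 220.0, 16.0]
y = glow(T, *true_p) + np.random.default_rng(3).normal(0.0, 0.02, T.size)

p0 = [20.0, 150.0, 10.0, 50.0, 195.0, 10.0, 150.0, 225.0, 10.0]
popt, _ = curve_fit(glow, T, y, p0=p0)        # separate the overlapping peaks
dose_signal = popt[0] + popt[3] + popt[6]     # summed areas of the three peaks
```

Summing the fitted areas of several peaks is what lets the method average out fit noise on any single peak, which is consistent with the reproducibility gain reported above.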

  3. Deconvolution of Voltage Sensor Time Series and Electro-diffusion Modeling Reveal the Role of Spine Geometry in Controlling Synaptic Strength.

    Science.gov (United States)

    Cartailler, Jerome; Kwon, Taekyung; Yuste, Rafael; Holcman, David

    2018-03-07

    Most synaptic excitatory connections are made on dendritic spines, but how the voltage in spines is modulated by their geometry remains unclear. To investigate the electrical properties of spines, we combine voltage imaging data with electro-diffusion modeling. We first present a temporal deconvolution procedure for the genetically encoded voltage sensor expressed in hippocampal cultured neurons and then use electro-diffusion theory to compute the electric field and the current-voltage conversion. We extract a range for the neck resistances of ⟨R⟩ = 100 ± 35 MΩ. When a significant current is injected in a spine, the neck resistance can be inversely proportional to its radius, but not to the radius squared, as predicted by Ohm's law. We conclude that the postsynaptic voltage can be modulated not only by changing the number of receptors, but also by the spine geometry. Thus, spine morphology could be a key component in determining synaptic transduction and plasticity. Copyright © 2018 Elsevier Inc. All rights reserved.
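The Ohm's-law baseline the paper contrasts with electro-diffusion is easy to make concrete (resistivity and geometry values below are assumed for illustration, not taken from the paper):

```python
import math

# Back-of-envelope Ohm's-law scaling for a cylindrical spine neck:
#   R_ohm = rho * L / (pi * a**2),  i.e. R ∝ 1/a²,
# whereas the electro-diffusion regime reported scales only as 1/a.
def neck_resistance_ohm(rho, L, a):
    return rho * L / (math.pi * a ** 2)

rho = 1.0      # Ohm·m, assumed cytoplasmic resistivity
L = 1.0e-6     # assumed 1 µm neck length
for a in (0.05e-6, 0.1e-6, 0.2e-6):           # neck radii, m
    R = neck_resistance_ohm(rho, L, a)
    print(f"a = {a * 1e9:3.0f} nm -> R = {R / 1e6:6.1f} MOhm")
```

With these assumed values the radii bracket the reported ⟨R⟩ ≈ 100 MΩ, which is why the 1/a versus 1/a² distinction matters at physiological neck sizes.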

  4. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  5. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].

  6. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717
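
    The PCA-based estimate can be illustrated with a toy simulation; every number below (marker count, allele frequencies, panel sizes, the backcross proportion) is invented for illustration and not taken from the study. A hybrid's admixture proportion is read off as its position along the first principal axis between the two reference-panel centroids.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 40                                               # hypothetical AIM panel (< 50 markers)
p_vin, p_wild = np.full(m, 0.85), np.full(m, 0.15)   # idealized AIM allele frequencies

# reference panels: genotype = 0/1/2 copies of the vinifera-typical allele
ref = np.vstack([rng.binomial(2, p_vin, (30, m)),
                 rng.binomial(2, p_wild, (30, m))]).astype(float)
mu = ref.mean(axis=0)
pc1 = np.linalg.svd(ref - mu, full_matrices=False)[2][0]   # first principal axis
scores = (ref - mu) @ pc1
c_vin, c_wild = scores[:30].mean(), scores[30:].mean()

def admixture(genotypes):
    # position along PC1 between the wild and vinifera centroids, clipped to [0, 1]
    t = ((genotypes - mu) @ pc1 - c_wild) / (c_vin - c_wild)
    return float(np.clip(t, 0.0, 1.0))

# simulate backcross hybrids with an expected 75% vinifera ancestry
q = 0.75
hybrids = rng.binomial(2, q * p_vin + (1 - q) * p_wild, (20, m))
mean_est = np.mean([admixture(h) for h in hybrids])
```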

  7. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    Full Text Available The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.

  8. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    Science.gov (United States)

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting (R) and respiring (DR) conditions is reported. In this study, recently developed chemometric software for use with three-dimensional instrumentation data was implemented, using a statistically based Fisher ratio method. The Fisher ratio method is fully automated and rapidly reduces the data to pinpoint two-dimensional chromatographic peaks differentiating sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold more than the 26 metabolite peaks identified in previous reports. In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 were statistically changing at the 95% confidence limit between the DR and R conditions according to the rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully deconvoluted (i.e., mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double-precision floating point).
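
    The Fisher ratio behind the method is, per variable, the class-to-class variance divided by the pooled within-class variance; a minimal two-class sketch on invented data:

```python
import numpy as np

def fisher_ratio(class_a, class_b):
    # per-variable Fisher ratio: between-class variance / pooled within-class variance
    na, nb = len(class_a), len(class_b)
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    grand = (na * ma + nb * mb) / (na + nb)
    between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2   # two classes -> df = 1
    within = (((class_a - ma) ** 2).sum(axis=0)
              + ((class_b - mb) ** 2).sum(axis=0)) / (na + nb - 2)
    return between / within

rng = np.random.default_rng(1)
R = rng.normal(10.0, 1.0, (6, 100))    # six replicate runs, 100 chromatographic variables
DR = rng.normal(10.0, 1.0, (6, 100))
DR[:, 42] += 8.0                       # one metabolite changes between growth conditions

ratios = fisher_ratio(R, DR)
top = int(ratios.argmax())             # pinpoints the class-differentiating variable
```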

  9. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each input separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit: the transient spin system response must decay within the scan period.

  10. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  11. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min, a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during annealing.

  12. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point scale of preference. The significance of the differences in readers' preference was tested with a Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.05). The visibility of chest anatomical structures on images processed with the deconvolution algorithm of PSF was superior to that on the original chest radiographs.

  13. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Directory of Open Access Journals (Sweden)

    Ujjwal Maulik

    Full Text Available Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that the significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts from the same post-discretized data.

  14. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Science.gov (United States)

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining) to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from the biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that the significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts from the same post-discretized data.

  15. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Full Text Available Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, like in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells which are most likely the cytoskeletal MreB protein and the division site regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  16. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were: centroid distance/FWHM ratio, signal/continuum ratio, and counting rate. Tests on synthetic spectra built with 3σ uncertainty gave suitable results for centroid distance/FWHM ratio ≥ 2, signal/continuum ratio ≥ 2, and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg-1 to (190±17) Bq.kg-1 for potassium-40; from (343±7) Bq.kg-1 to (881±6) Bq.kg-1 for thorium-232 and from (100±3) Bq.kg-1 to (164±4) Bq.kg-1 for uranium-238. The soil activity varies from (148±1) Bq.kg-1 to (652±31) Bq.kg-1 for potassium-40; from (1100±11) Bq.kg-1 to (5700±40) Bq.kg-1 for thorium-232 and from (190±2) Bq.kg-1 to (779±15) Bq.kg-1 for uranium-238. Among 11 samples, the activity value discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for such an activity range and that the convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within an acceptable limit, if the identification of each nuclide in the radioactive series is not required [fr
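
    As an illustration of the approach (not the thesis' Matlab code), the sketch below fits a synthetic NaI(Tl) doublet (two Gaussians sharing one FWHM on a flat continuum) with the Nelder-Mead simplex from scipy.optimize; the invented peak parameters respect the validity range reported above (centroid distance/FWHM = 2.5, signal/continuum well above 2).

```python
import numpy as np
from scipy.optimize import minimize

def doublet(params, x):
    a1, c1, a2, c2, fwhm, b = params
    sigma = fwhm / 2.355                        # FWHM -> Gaussian sigma
    g = lambda a, c: a * np.exp(-0.5 * ((x - c) / sigma) ** 2)
    return g(a1, c1) + g(a2, c2) + b            # two photopeaks on a flat continuum

x = np.arange(200, dtype=float)
true_params = (800.0, 85.0, 500.0, 115.0, 12.0, 20.0)
rng = np.random.default_rng(1)
counts = rng.poisson(doublet(true_params, x)).astype(float)   # synthetic spectrum

loss = lambda p: np.sum((doublet(p, x) - counts) ** 2)
res = minimize(loss, x0=(600.0, 80.0, 600.0, 120.0, 15.0, 10.0),
               method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
a1, c1, a2, c2, fwhm, b = res.x
```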

  17. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the regularized Non-Negative Least Squares (NNLS) method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm, with significantly less computer processing time.
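
    A minimal one-dimensional Richardson-Lucy sketch (numpy only; the synthetic doublet and Gaussian resolution function are invented for illustration, not CDBS data):

```python
import numpy as np

def conv(x, k):
    return np.convolve(x, k, mode="same")

def richardson_lucy(measured, psf, n_iter=1000):
    # multiplicative maximum-likelihood update; psf[::-1] is the adjoint (mirrored) kernel
    est = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        est = est * conv(measured / (conv(est, psf) + 1e-12), psf[::-1])
    return est

x = np.arange(200, dtype=float)
gauss = lambda c, s: np.exp(-0.5 * ((x - c) / s) ** 2)
truth = gauss(80, 3) + 0.6 * gauss(120, 3)        # narrow underlying doublet
k = np.exp(-0.5 * (np.arange(-30, 31) / 10.0) ** 2)
psf = k / k.sum()                                  # broad instrumental resolution function
measured = conv(truth, psf)                        # what the spectrometer records

restored = richardson_lucy(measured, psf)          # sharpened estimate of the doublet
```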

  18. Deconvolution analysis of 99mTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of 99mTc-methylene diphosphonate (MDP) and 47Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of 99mTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The 99mTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between 99mTc-MDP bone accumulation rates and the results of 47Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). As a result, deconvolution analysis of regional 99mTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of 99mTc-MDP kinetics and 47Ca kinetics might suggest preferential binding of 99mTc-MDP to the organic matrix of bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  19. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  20. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...

  1. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-tosurface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...

  2. ddClone: joint statistical inference of clonal populations from single cell and bulk tumour sequencing data.

    Science.gov (United States)

    Salehi, Sohrab; Steif, Adi; Roth, Andrew; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2017-03-01

    Next-generation sequencing (NGS) of bulk tumour tissue can identify constituent cell populations in cancers and measure their abundance. This requires computational deconvolution of allelic counts from somatic mutations, which may be incapable of fully resolving the underlying population structure. Single cell sequencing (SCS) is a more direct method, although its replacement of NGS is impeded by technical noise and sampling limitations. We propose ddClone, which analytically integrates NGS and SCS data, leveraging their complementary attributes through joint statistical inference. We show on real and simulated datasets that ddClone produces more accurate results than can be achieved by either method alone.

  3. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages, such as three-dimensional refocusing and unambiguous object reconstruction.

  4. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  5. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

    Mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach might underestimate the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves obtained from patients with different renal clearance levels with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the true MTT when the MTT was higher than 6 min. The longer the MTT, the greater the underestimation. Up to an MTT value of 6 min, the error in the MTT estimation is negligible. As normal cortical transit is less than 2 min, this approach is used in patients to calculate the pixel-by-pixel cortical mean transit time and to create an MTT parametric image without deconvolution.
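
    The estimator and its finite-acquisition bias can be reproduced numerically; the mono-exponential plasma and retention curves below are invented for illustration, not patient data.

```python
import numpy as np

t = np.linspace(0.0, 20.0, 2001)         # a 20 min renogram, time in minutes
dt = t[1] - t[0]
plasma = np.exp(-t / 4.0)                # hypothetical plasma disappearance curve

def estimated_mtt(true_mtt):
    retention = np.exp(-t / true_mtt)    # impulse retention function; its full integral = true_mtt
    # kidney curve = plasma convolved with retention, truncated to the acquisition window
    renogram = np.convolve(plasma, retention)[: t.size] * dt
    # MTT estimate: integral of renal activity / integral of plasma activity over 0..20 min
    return renogram.sum() / plasma.sum()

est_short = estimated_mtt(2.0)   # normal cortical transit: bias negligible
est_long = estimated_mtt(12.0)   # prolonged transit: clearly underestimated
```

    Consistent with the abstract, the bias is negligible for short transit times but grows once the true MTT is well above 6 min.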

  6. The deconvolution of Doppler-broadened positron annihilation measurements using fast Fourier transforms and power spectral analysis

    International Nuclear Information System (INIS)

    Schaffer, J.P.; Shaughnessy, E.J.; Jones, P.L.

    1984-01-01

    A deconvolution procedure which corrects Doppler-broadened positron annihilation spectra for instrument resolution is described. The method employs fast Fourier transforms, is model independent, and does not require iteration. The mathematical difficulties associated with the ill-posed first-order Fredholm integral equation are overcome by using power spectral analysis to select a limited number of low-frequency Fourier coefficients. The FFT/power spectrum method is then demonstrated for an irradiated high-purity single-crystal sapphire sample. (orig.)
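
    The essence of such a procedure can be sketched as follows (all line shapes, noise levels, and thresholds below are illustrative choices, not the paper's): divide the FFT of the measured spectrum by that of the resolution function, but only at the low-frequency coefficients that the power spectrum shows to be above the noise plateau.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
x = np.arange(n)
gauss = lambda c, s: np.exp(-0.5 * ((x - c) / s) ** 2)

truth = gauss(110, 4) + 0.7 * gauss(140, 4)     # underlying Doppler line shapes
res_fn = gauss(n // 2, 9)
res_fn /= res_fn.sum()                          # instrument resolution function
kernel = np.fft.fft(np.fft.ifftshift(res_fn))   # centered kernel -> zero-phase FFT
measured = np.real(np.fft.ifft(np.fft.fft(truth) * kernel)) + rng.normal(0.0, 0.002, n)

F = np.fft.fft(measured)
power = np.abs(F) ** 2
# keep coefficients above the noise plateau of the power spectrum,
# and only where the resolution function still has appreciable support
keep = (power > 10.0 * np.median(power)) & (np.abs(kernel) > 5e-3)
restored = np.real(np.fft.ifft(np.where(keep, F, 0.0) / np.where(keep, kernel, 1.0)))
```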

  7. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrieval, especially of feldspar proportions (e.g. K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.

  8. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-02-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrieval, especially of feldspar proportions (e.g. K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.
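
    At its core, this kind of linear deconvolution is non-negative least-squares unmixing of a measured emissivity spectrum against endmember spectra. A toy sketch with invented endmembers follows; scipy's nnls stands in for whatever solver the authors used.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_bands = 60                                   # hypothetical TIR emissivity channels
E = rng.random((n_bands, 3))                   # columns: three invented endmember spectra
true_frac = np.array([0.6, 0.3, 0.1])          # e.g. quartz / feldspar / glass fractions
mixed = E @ true_frac + rng.normal(0.0, 0.001, n_bands)   # linearly mixed pixel spectrum

frac, residual = nnls(E, mixed)                # non-negative least-squares unmixing
frac /= frac.sum()                             # normalize to abundance fractions
```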

  9. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    Science.gov (United States)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in recent years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.
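
    The benefit of combining frames blurred by different engineered PSFs can be illustrated with a toy 1-D joint Wiener-style inversion. The PSF shapes and regularization constant below are invented for illustration; the paper's PSFs come from conical diffraction.

```python
import numpy as np

n = 128
x = np.zeros(n)
x[[20, 45, 90]] = [1.0, 0.6, 0.8]         # sparse 1-D "scene"

# Two different engineered PSFs (hypothetical shapes)
t = np.arange(-8, 9)
psf1 = np.exp(-t**2 / 4.0); psf1 /= psf1.sum()
psf2 = np.exp(-np.abs(t) / 2.0); psf2 /= psf2.sum()

def blur(x, psf):
    """Circular convolution of x with psf, returning the blurred signal and OTF."""
    H = np.fft.fft(np.pad(psf, (0, n - psf.size)))
    return np.real(np.fft.ifft(np.fft.fft(x) * H)), H

y1, H1 = blur(x, psf1)
y2, H2 = blur(x, psf2)

# Joint Wiener-style inversion: frequencies suppressed by one PSF may
# survive in the other, so the combined estimate is better conditioned.
eps = 1e-4
num = np.conj(H1) * np.fft.fft(y1) + np.conj(H2) * np.fft.fft(y2)
den = np.abs(H1)**2 + np.abs(H2)**2 + eps
x_hat = np.real(np.fft.ifft(num / den))
```

In a noisy system the constant `eps` would be replaced by a noise-dependent term, but the structure of the multi-kernel combination is the same.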

  10. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

    Thermogravimetric pyrolysis kinetics of bamboo waste (BW) has been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, BW pyrolytic profiles could be well separated into three reactions, each of which corresponded to pseudo-hemicellulose (P-HC), pseudo-cellulose (P-CL), and pseudo-lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energies of P-HC, P-CL and P-LG were found to be 175.6 kJ/mol, 199.7 kJ/mol, and 158.4 kJ/mol, respectively. The energy compensation effects (ln k0,z vs. Ez) of the pseudo-components showed good linearity, from which the pre-exponential factors (k0) were determined as 6.22E+11 s^-1 (P-HC), 4.50E+14 s^-1 (P-CL) and 1.3E+10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanisms of P-HC, P-CL, and P-LG were reaction-order models f(α)=(1-α)^2, f(α)=1-α and f(α)=(1-α)^n (n=6-8), respectively. The mechanisms of P-HC and P-CL could be further reconstructed to n-th order Avrami-Erofeyev models f(α)=0.62(1-α)[-ln(1-α)]^(-0.61) (n=0.62) and f(α)=1.08(1-α)[-ln(1-α)]^(0.074) (n=1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
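
    The Asym2sig peak shape used for the deconvolution can be sketched as follows; all parameter values below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def asym2sig(T, A, xc, w1, w2, w3):
    """Asymmetric double sigmoidal peak: logistic rise times logistic fall."""
    rise = 1.0 / (1.0 + np.exp(-(T - xc + w1 / 2.0) / w2))
    fall = 1.0 - 1.0 / (1.0 + np.exp(-(T - xc - w1 / 2.0) / w3))
    return A * rise * fall

T = np.linspace(400.0, 900.0, 1000)       # temperature grid, K
dT = T[1] - T[0]

# Illustrative parameters for the three pseudo-components (not the paper's fit)
p_hc = asym2sig(T, 0.0040, 580.0, 40.0, 12.0, 15.0)   # pseudo-hemicellulose
p_cl = asym2sig(T, 0.0120, 640.0, 60.0, 8.0, 10.0)    # pseudo-cellulose
p_lg = asym2sig(T, 0.0012, 700.0, 150.0, 40.0, 60.0)  # pseudo-lignin
dtg = p_hc + p_cl + p_lg                  # composite DTG curve

# Relative mass-loss contribution of each pseudo-component (trapezoid-free sums)
areas = np.array([dT * c.sum() for c in (p_hc, p_cl, p_lg)])
fractions = areas / areas.sum()
```

In the actual workflow, the three sets of Asym2sig parameters are obtained by nonlinear least-squares fitting to the measured DTG curve before the kinetic analysis is applied to each separated component.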

  11. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author) [fr]

  12. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses optically fast multichannel Thomson scattering optics that can be used for H-alpha emission profile measurement. A technique based on the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that for this case, about 28 Fourier modes are optimal to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, the assumed up-down symmetry and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges

  13. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    Science.gov (United States)

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images, taken away from the Scherzer focus condition and therefore not representing the projected structures intuitively, were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The restoration of defect structures, together with the recognition of Si and C atoms from the experimental images, is illustrated. The structure maps of an intrinsic stacking fault in the SiC area, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
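
    The complex-deconvolution step, dividing the voltage spectrum by the frequency-dependent complex sensitivity, can be sketched in a few lines. The sensitivity curve and waveform below are invented for illustration; a real pipeline would regularize the division where the sensitivity approaches the noise floor.

```python
import numpy as np

fs = 100e6                        # sample rate, Hz
n = 4096
t = np.arange(n) / fs
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# "True" focal pressure waveform: distorted 1.05 MHz tone burst (synthetic)
f0 = 1.05e6
env = np.exp(-((t - 20e-6) / 8e-6) ** 2)
p_true = env * (np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(4 * np.pi * f0 * t))

# Hypothetical complex sensitivity M(f) in V/Pa: mild roll-off plus phase lag
M = (1e-7 / (1.0 + (freqs / 20e6) ** 2)) * np.exp(-1j * 2 * np.pi * freqs * 5e-8)

# Simulated hydrophone voltage = pressure filtered by the sensitivity
v = np.fft.irfft(np.fft.rfft(p_true) * M, n)

# Complex deconvolution: divide the voltage spectrum by the sensitivity
p_rec = np.fft.irfft(np.fft.rfft(v) / M, n)
```

Using only the sensitivity magnitude (a single calibration number) would distort the nonlinear waveform; keeping the complex phase is what makes p+ and p- estimates reliable.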

  15. Novel statistical framework to identify differentially expressed genes allowing transcriptomic background differences.

    Science.gov (United States)

    Ling, Zhi-Qiang; Wang, Yi; Mukaisho, Kenichi; Hattori, Takanori; Tatsuta, Takeshi; Ge, Ming-Hua; Jin, Li; Mao, Wei-Min; Sugihara, Hiroyuki

    2010-06-01

    Tests of differentially expressed genes (DEGs) from microarray experiments are based on the null hypothesis that genes that are irrelevant to the phenotype/stimulus are expressed equally in the target and control samples. However, this strict hypothesis is not always true, as there can be several transcriptomic background differences between target and control samples, including different cell/tissue types, different cell cycle stages and different biological donors. These differences lead to increased false positives, which have little biological/medical significance. In this article, we propose a statistical framework to identify DEGs between target and control samples from expression microarray data allowing transcriptomic background differences between these samples by introducing a modified null hypothesis that the gene expression background difference is normally distributed. We use an iterative procedure to perform robust estimation of the null hypothesis and identify DEGs as outliers. We evaluated our method using our own triplicate microarray experiment, followed by validations with reverse transcription-polymerase chain reaction (RT-PCR) and on the MicroArray Quality Control dataset. The evaluations suggest that our technique (i) results in fewer false positives and false negatives, as measured by the degree of agreement with RT-PCR of the same samples, (ii) can be applied to different microarray platforms and results in better reproducibility as measured by the degree of DEG identification concordance both intra- and inter-platforms and (iii) can be applied efficiently with only a few microarray replicates. Based on these evaluations, we propose that this method not only identifies more reliable and biologically/medically significant DEGs, but also reduces the power-cost tradeoff problem in the microarray field. Source code and binaries freely available for download at http://comonca.org.cn/fdca/resources/softwares/deg.zip.
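
    The core idea, robustly fitting a normal null to the background difference and calling outliers DEGs, can be sketched as follows. This is a simplified illustration with synthetic data, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 5000

# Log-ratios target vs. control: most genes follow a shifted "background
# difference" null (hypothetical mean 0.5), a few are true DEGs
logratio = rng.normal(0.5, 0.2, n_genes)
deg_idx = np.arange(50)
logratio[deg_idx] += rng.choice([-2.0, 2.0], size=50)

# Iteratively re-estimate the null N(mu, sigma) after trimming outliers,
# then call genes outside mu +/- 3*sigma differentially expressed
keep = np.ones(n_genes, dtype=bool)
for _ in range(10):
    mu, sigma = logratio[keep].mean(), logratio[keep].std()
    keep = np.abs(logratio - mu) < 3.0 * sigma
deg_called = ~keep
```

Note that the estimated null mean is about 0.5, not 0: a classical equal-expression test would flag thousands of background genes here, while the shifted-null formulation flags only the true outliers.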

  16. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  17. Statistical symmetries in physics

    International Nuclear Information System (INIS)

    Green, H.S.; Adelaide Univ., SA

    1994-01-01

    Every law of physics is invariant under some group of transformations and is therefore the expression of some type of symmetry. Symmetries are classified as geometrical, dynamical or statistical. At the most fundamental level, statistical symmetries are expressed in the field theories of the elementary particles. This paper traces some of the developments from the discovery of Bose statistics, one of the two fundamental symmetries of physics. A series of generalizations of Bose statistics is described. A supersymmetric generalization accommodates fermions as well as bosons, and further generalizations, including parastatistics, modular statistics and graded statistics, accommodate particles with properties such as 'colour'. A factorization of elements of gl(n_b, n_f) can be used to define truncated boson operators. A general construction is given for q-deformed boson operators, and explicit constructions of the same type are given for various 'deformed' algebras. A summary is given of some of the applications and potential applications. 39 refs., 2 figs

  18. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on the “iterative peak fitting deconvolution” method and a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented into the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method largely validated by industrial standards to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert parameter fine-tuning.

  19. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    International Nuclear Information System (INIS)

    Floberg, J M; Holden, J E

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)
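
    The two-stage idea, Gaussian smoothing followed by EM (Richardson-Lucy) deconvolution with the same kernel, can be illustrated on a 1-D synthetic time-activity curve. This is a sketch of the principle only, not the authors' 4-D implementation.

```python
import numpy as np

def gauss_kernel(sigma, radius=8):
    """Normalized discrete Gaussian kernel."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def conv(x, k):
    return np.convolve(x, k, mode="same")

rng = np.random.default_rng(2)
# Synthetic time-activity curve: uptake ramp followed by washout
tac = np.concatenate([np.linspace(0.0, 100.0, 20),
                      100.0 * np.exp(-0.05 * np.arange(40))])
noisy = rng.poisson(tac).astype(float)    # Poisson noise, as in PET counts

k = gauss_kernel(2.0)
smoothed = conv(noisy, k)                 # stage 1: Gaussian filter

# Stage 2: EM (Richardson-Lucy) deconvolution with the same Gaussian,
# restoring the frequencies the filter suppressed most
est = np.full_like(smoothed, smoothed.mean())
for _ in range(30):
    ratio = smoothed / np.maximum(conv(est, k), 1e-12)
    est *= conv(ratio, k[::-1])
```

The multiplicative EM update keeps the estimate non-negative, which suits count data; the number of EM iterations controls how much of the suppressed signal is restored.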

  20. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    Science.gov (United States)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there was a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by a crosscorrelation of responses. Recently, an alternative implementation was proposed as SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates both for the source-sampling and the source wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction though for the implementation of MDD was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of the implementation of MDD when considering teleseismic wavefields. We address specific problems for teleseismic wavefields, such as long and complicated source
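
    The basic crosscorrelation form of SI, which MDD refines, can be illustrated with two synthetic recordings of a distant noise source. This is a toy 1-D sketch, not the MDD scheme itself.

```python
import numpy as np

n = 2048
rng = np.random.default_rng(5)

# A distant source emits random noise; receiver B records the same wavefield
# 50 samples after receiver A (illustrative noise-free geometry)
src = rng.normal(0.0, 1.0, n)
delay = 50
u_a = src
u_b = np.concatenate([np.zeros(delay), src[:-delay]])

# SI by crosscorrelation: the correlation of the two recordings peaks at the
# inter-receiver traveltime, as if a source had acted at receiver A
xcorr = np.correlate(u_b, u_a, mode="full")
lags = np.arange(-n + 1, n)
lag_at_peak = int(lags[np.argmax(xcorr)])
```

When the source distribution is irregular or the wavelet is long and complicated, this correlation estimate is biased, which is exactly what the deconvolution (MDD) formulation is designed to correct.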

  1. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the result is often very different from the real image because of the existence of noise. In order to solve this ill-posed problem in image restoration, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. Then the function is iteratively updated, and an iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. The information in the gradient domain is considered better suited for the estimation of the blur kernel, so the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain by the fast Fourier transform. In addition, in order to improve the effectiveness of the algorithm, a multi-scale iterative optimization method is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space can obtain a unique and stable solution in the process of image restoration, which not only keeps the edges and details of the image, but also ensures the accuracy of the results.
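
    The L1/L2 gradient prior rewards sparse gradients, so it evaluates lower on sharp images than on blurred ones; a minimal sketch with a synthetic image and a simple box blur:

```python
import numpy as np

def l1_over_l2(img):
    """L1/L2 ratio of image gradients: lower for sharp (sparse-gradient) images."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    g = np.concatenate([gx.ravel(), gy.ravel()])
    return np.abs(g).sum() / np.sqrt((g**2).sum())

# Piecewise-constant "sharp" image: its gradients are sparse
sharp = np.zeros((64, 64))
sharp[16:48, 16:48] = 1.0

# Box-blurring spreads the gradient energy over more pixels, so the
# L1/L2 ratio rises -- which is why minimizing it favours sharp estimates
k = np.ones(5) / 5.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, blurred)
```

Unlike the plain L1 norm, which blurring leaves roughly unchanged (total variation is preserved along a monotone ramp), the L1/L2 ratio is scale-invariant and genuinely discriminates sharp from blurry, which is the motivation for using it as the prior.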

  2. A new efficient statistical test for detecting variability in the gene expression data.

    Science.gov (United States)

    Mathur, Sunil; Dolo, Samuel

    2008-08-01

    DNA microarray technology allows researchers to monitor the expressions of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures and each step is a potential source of variance. This makes the measurement of variability difficult, because an approach based on gene-by-gene estimation of variance will have few degrees of freedom. It is highly possible that the assumption of equal variance for all the expression levels may not hold. Also, the assumption of normality of gene expressions may not hold. Thus it is essential to have a statistical procedure which is not based on the normality assumption and which can detect genes with differential variance efficiently. The detection of differential gene expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of a normal distribution, which makes them inapplicable in many situations, and it is also hard to verify the suitability of the normal distribution assumption for the given data set. The proposed test does not require the assumption of the distribution for the underlying population and hence is more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, which shows that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with some of the existing procedures. It is found that the proposed test is more powerful than commonly used tests under almost all the distributions considered in the study. A

  3. Evaluation of obstructive uropathy by deconvolution analysis of {sup 99m}Tc-mercaptoacetyltriglycine ({sup 99m}Tc-MAG3) renal scintigraphic data. A comparison with diuresis renography

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Yoshiyuki [Mie Univ., Tsu (Japan). School of Medicine

    1997-06-01

    Clinical significance of ERPF (effective renal plasma flow) and MTT (mean transit time) calculated by deconvolution analysis was studied in patients with obstructive uropathy. Subjects were 84 kidneys of 38 patients and 4 people without renal abnormality (22 males and 20 females), whose mean age was 53.8 y. Scintigraphy was done with a Toshiba {gamma}-camera GCA-7200A equipped with a low energy-high resolution collimator with an energy window of 149 keV{+-}20%, starting 20 min after loading of 500 ml of water and immediately after intravenous administration of {sup 99m}Tc-MAG3 (200 MBq). At 5 min later, blood was collected, and at 10 min, furosemide was intravenously given. Plasma radioactivity was measured in a well-type scintillation counter and was used for correction of the blood concentration-time curve obtained from the heart area data. Split MTT, regional MTT and ERPF were calculated by deconvolution analysis. Impaired transit was judged from the renogram after furosemide loading and was classified into 6 types. ERPF was found lowered in cases of obstruction and in low renal function. Regional MTT was prolonged only in the former cases. The examination with deconvolution analysis was concluded to be widely applicable, since it gave useful information for treatment. (K.H.)
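
    The deconvolution step, recovering the renal retention function from the organ curve and the blood input curve, can be sketched as inversion of a lower-triangular Toeplitz system. All curves below are synthetic, chosen only so that the known answer (MTT = 4 min) is recoverable.

```python
import numpy as np

dt = 0.1                                  # frame duration, min (illustrative)
t = np.arange(0, 20, dt)

inp = np.exp(-t / 2.0)                    # synthetic blood input curve
ret_true = (t < 4.0).astype(float)        # rectangular retention, MTT = 4 min

# Organ curve = discrete convolution of input and retention function
organ = np.convolve(inp, ret_true)[: t.size] * dt

# Deconvolution: invert the lower-triangular Toeplitz system A @ ret = organ
n = t.size
idx = np.subtract.outer(np.arange(n), np.arange(n))
A = dt * np.where(idx >= 0, inp[np.abs(idx)], 0.0)
ret = np.linalg.solve(A, organ)

# Mean transit time = area under retention curve / plateau height
mtt = ret.sum() * dt / ret.max()
```

With noisy clinical data the direct triangular solve amplifies noise, so practical implementations add regularization or matrix-rank constraints; the structure of the inversion is the same.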

  4. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes but not in foam cells fluorescent sterol was confined to the droplet-limiting membrane. We developed an approach to visualize...... macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon...

  5. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request
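
    The maximum-entropy idea can be illustrated with a toy entropy-penalized unfolding solved by projected gradient descent. This is a sketch only: the response matrix is invented and this is not the MAXED algorithm, which enforces the data constraints through Lagrange multipliers.

```python
import numpy as np

# Toy 3-sphere x 3-energy-group response matrix (invented numbers)
R = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.2],
              [0.0, 0.2, 1.0]])
phi_true = np.array([1.0, 2.0, 3.0])
d = R @ phi_true                          # noise-free "measured" counts

# Entropy-penalized least squares:
#   minimize ||R phi - d||^2 + lam * sum(phi * ln(phi)),  phi > 0
lam, step = 1e-3, 0.01
phi = np.ones(3)
for _ in range(20000):
    grad = 2.0 * R.T @ (R @ phi - d) + lam * (np.log(phi) + 1.0)
    phi = np.maximum(phi - step * grad, 1e-9)   # projected gradient step
```

The entropy term selects, among all spectra consistent with the few sphere readings, the least-structured one, which is the rationale behind the maximum entropy principle for this strongly underdetermined unfolding problem.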

  6. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA){sub n} repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
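
    If the stutter pattern is known, removing it amounts to deconvolution with a short causal kernel; a minimal sketch with hypothetical stutter ratios:

```python
import numpy as np

n = 20   # allele bins, indexed by fragment length in repeat units

# Hypothetical stutter ratios: each allele leaks 25% / 10% of its signal
# into bands one / two repeat units shorter
K = np.zeros((n, n))
for s, w in enumerate([1.0, 0.25, 0.10]):
    K += w * np.eye(n, k=s)               # band i collects from allele i + s

true_alleles = np.zeros(n)
true_alleles[[8, 10]] = 1.0               # heterozygote, two repeats apart

observed = K @ true_alleles               # stutter overlaps the lower allele
recovered = np.linalg.solve(K, observed)  # deconvolution removes the stutter
```

Here the lower allele's band carries an extra 10% contributed by the upper allele's stutter, the kind of overlap that defeats visual allele calling; inverting the (unit-diagonal triangular) stutter matrix separates the two cleanly.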

  7. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  8. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels was implied by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra, and their associated retention times, are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high resolution library. More specifically, a correlative measure was used that included the uncertainties in the least-squares fitting, as well as the probability for different proposals for the number of compounds in the mixture. The method was tested on simulated high resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to detect accurately the retention times and the spectra of the peaks. In our case, with an extremely high degree of coelution, 5 of the 7 compounds present in the selected region of interest were correctly assessed. Finally, the comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
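
    The model-selection step can be illustrated with BIC choosing the number of coeluting components, with amplitudes fitted by linear least squares at candidate retention times. This is a toy sketch of BIC as Occam's razor, not the paper's probabilistic pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 200)

def peak(center, width=0.4):
    """Gaussian chromatographic peak shape on the shared time grid."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Two strongly coeluting peaks plus noise (synthetic chromatogram)
y = 1.0 * peak(4.6) + 0.7 * peak(5.2) + rng.normal(0.0, 0.02, t.size)

def bic_for(centers):
    """Fit peak amplitudes by linear least squares, return the BIC."""
    X = np.column_stack([peak(c) for c in centers])
    amps, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ amps) ** 2))
    return t.size * np.log(rss / t.size) + len(centers) * np.log(t.size)

# Hypothetical candidate models with 1, 2 or 3 components
candidates = {1: [4.9], 2: [4.6, 5.2], 3: [4.4, 4.9, 5.4]}
bics = {k: bic_for(c) for k, c in candidates.items()}
best = min(bics, key=bics.get)
```

The one-component model underfits and the three-component model pays the complexity penalty without reducing the residual enough, so BIC selects the correct two-component description.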

  9. Statistical modelling and deconvolution of yield meter data

    DEFF Research Database (Denmark)

    Tøgersen, Frede Aakmann; Waagepetersen, Rasmus Plenge

    2004-01-01

    and an impulse response function. This results in an unusual spatial covariance structure (depending on the driving pattern of the combine harverster) for the yield monitoring system data. Parameters of the impulse response function and the spatial covariance function of the yield are estimated using maximum...

  10. Statistical modelling and deconvolution of yield meter data

    DEFF Research Database (Denmark)

    Tøgersen, Frede Aakmann; Waagepetersen, Rasmus Plenge

    Data for yield maps can be obtained from modern combine harvesters equipped with a differential global positioning system and a yield monitoring system. Due to delay and smoothing effects in the combine harvester the recorded yield data for a location represents a shifted weighted average of yiel...

  11. A unified approach to deconvolution radiation spectra measured by radiochromic films

    CERN Document Server

    Stancic, V; Ljubenov, V

    2002-01-01

    A method for evaluating the energy distribution of a radiation source on the basis of the measured spatial distribution of deposited energy is proposed. The measured data were obtained using radiochromic films. Mathematical modeling is defined as a Fredholm integral equation inversion problem. Negative solutions were treated via an additional condition expressed through undefined energy group boundaries, arising from the physical phenomenon of statistical uncertainty. Examples are given for an electron source and a neutron radiation field.

  12. ArraySolver: an algorithm for colour-coded graphical display and Wilcoxon signed-rank statistics for comparing microarray gene expression data.

    Science.gov (United States)

    Khan, Haseeb Ahmad

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-group comparison of microarray data is still lacking, and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-group comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas ArraySolver appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, a convenient report format, accurate statistics and the familiar Excel platform.
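
    The Wilcoxon signed-rank test at the heart of the comparison can be sketched with a normal-approximation implementation (illustrative, not ArraySolver's code; ties and small-n exact tables are ignored):

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """Paired two-group comparison: W statistic and normal-approximation p-value."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0.0]                                  # drop zero differences
    n = d.size
    ranks = np.abs(d).argsort().argsort() + 1.0      # ranks of |d| (no ties assumed)
    w_pos = ranks[d > 0].sum()                       # sum of positive-difference ranks
    mu = n * (n + 1) / 4.0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_pos - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))   # two-sided
    return w_pos, p

# One gene across 12 paired arrays (synthetic values, consistent up-regulation)
ctrl = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7, 5.0, 5.2, 4.9, 5.1])
case = ctrl + 0.8 + np.linspace(-0.1, 0.1, 12)
w, p = wilcoxon_signed_rank(case, ctrl)
```

Because the test uses only signs and ranks of the paired differences, it makes no normality assumption, which is the abstract's argument for preferring it with small gene signatures.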

  13. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of the Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse-problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach for automatically finding a suitable value of the regularization parameter from the input data alone. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
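The three constraints can be sketched as a small inverse problem: causality via a lower-triangular convolution matrix, smoothness via an l2 penalty on second differences, and positivity via non-negative least squares. This is an illustrative reconstruction with made-up data, not the authors' algorithm or code:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# True residence-time curve: smooth, positive, causal (invented example).
t = np.arange(60)
h_true = np.exp(-((t - 10.0) / 6.0) ** 2)
h_true /= h_true.sum()

# Input "rain" series; output "aquifer" series is the convolution plus noise.
x = rng.uniform(0.0, 1.0, 200)
y = np.convolve(x, h_true)[: len(x)] + rng.normal(0.0, 0.01, len(x))

# Causality: lower-triangular Toeplitz convolution matrix.
A = toeplitz(x, np.zeros(len(h_true)))

# Smoothness: single regularization parameter lam on the second-difference
# operator D, i.e. the penalty lam * ||D h||^2.
D = np.diff(np.eye(len(h_true)), n=2, axis=0)
lam = 1.0

# Positivity: non-negative least squares on the stacked system.
A_aug = np.vstack([A, np.sqrt(lam) * D])
y_aug = np.concatenate([y, np.zeros(D.shape[0])])
h_est, _ = nnls(A_aug, y_aug)
print("max abs error:", np.max(np.abs(h_est - h_true)))
```

As in the record, a single parameter (lam) trades off smoothness against fidelity to the measurements.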

  14. Microarray data and gene expression statistics for Saccharomyces cerevisiae exposed to simulated asbestos mine drainage

    Directory of Open Access Journals (Sweden)

    Heather E. Driscoll

    2017-08-01

    Full Text Available Here we describe microarray expression data (raw and normalized), experimental metadata, and gene-level data with expression statistics from Saccharomyces cerevisiae exposed to simulated asbestos mine drainage from the Vermont Asbestos Group (VAG) Mine on Belvidere Mountain in northern Vermont, USA. For nearly 100 years (between the late 1890s and 1993), chrysotile asbestos fibers were extracted from serpentinized ultramafic rock at the VAG Mine for use in the construction and manufacturing industries. Studies have shown that nearby water courses and streambeds have become contaminated with asbestos mine tailings runoff, including elevated levels of magnesium, nickel, chromium, and arsenic, elevated pH, and chrysotile asbestos-laden mine tailings, due to leaching and gradual erosion of massive piles of mine waste covering approximately 9 km². We exposed yeast to simulated VAG Mine tailings leachate to help gain insight into how eukaryotic cells exposed to VAG Mine drainage may respond in the mine environment. Affymetrix GeneChip® Yeast Genome 2.0 Arrays were utilized to assess gene expression after 24-h exposure to simulated VAG Mine tailings runoff. The chemistry of the mine-tailings leachate, the mine-tailings leachate plus yeast extract peptone dextrose media, and the control yeast extract peptone dextrose media is also reported. To our knowledge this is the first dataset to assess global gene expression patterns in a eukaryotic model system simulating asbestos mine tailings runoff exposure. Raw and normalized gene expression data are accessible through the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) Database, Series GSE89875 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE89875).

  15. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    Science.gov (United States)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolution of all the data recorded at a seismic station, removing phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.

  16. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    Science.gov (United States)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of broadly varied MW structurally similar to native lignin. For the first time, we report an effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L⁻¹ formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.

  17. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    Directory of Open Access Journals (Sweden)

    Ma Ligeng

    2003-11-01

    Full Text Available Abstract Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expression for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme, and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at the pathway level. The first feature allows for the periodic updating of genomic data for pathways, while the second can provide insight into how treatments affect relevant pathways for the selected experiment(s).

  18. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics, as given by Haldane, allows for a statistical interaction between distinguishable particles (multi-species statistics). The thermodynamic quantities for such statistics can be evaluated exactly. The explicit expressions for the cluster coefficients are presented. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models. The interesting questions of generalizing this correspondence to the higher-dimensional and the multi-species cases remain essentially open.

  19. Transplantation of epiphytic bioaccumulators (Tillandsia capillaris) for high spatial resolution biomonitoring of trace elements and point sources deconvolution in a complex mining/smelting urban context

    Science.gov (United States)

    Goix, Sylvaine; Resongles, Eléonore; Point, David; Oliva, Priscia; Duprey, Jean Louis; de la Galvez, Erika; Ugarte, Lincy; Huayta, Carlos; Prunier, Jonathan; Zouiten, Cyril; Gardon, Jacques

    2013-12-01

    Monitoring atmospheric trace element (TE) levels and tracing their source origin is essential for exposure assessment and human health studies. Epiphytic Tillandsia capillaris plants were used as bioaccumulators of TE in a complex polymetallic mining/smelting urban context (Oruro, Bolivia). Specimens collected from a pristine reference site were transplanted at high spatial resolution (~1 sample/km²) throughout the urban area. Twenty-seven elements were measured after a 4-month exposure, also providing new values for the reference material BCR482. Statistical power analysis of this biomonitoring mapping approach against classical aerosol surveys performed on the same site showed that T. capillaris was better able to detect geographical trends and to deconvolute multiple contamination sources using geostatistical principal component analysis. Transplanted specimens in the vicinity of the mining and smelting areas were characterized by extreme TE accumulation (Sn > Ag > Sb > Pb > Cd > As > W > Cu > Zn). Three contamination sources were identified: mining (Ag, Pb, Sb), smelting (As, Sn) and road traffic (Zn) emissions, confirming the results of a previous aerosol survey.

  20. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required in order to obtain information that relates to crystallographic-surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides means of atomic resolution where the tip participates actively in the process of imaging. Two metallic surfaces influence ions trapped ... Features observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted, so as to resolve more details of the crystalline morphology (see figure). Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current ...

  1. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  2. A novel statistical algorithm for gene expression analysis helps differentiate pregnane X receptor-dependent and independent mechanisms of toxicity.

    Directory of Open Access Journals (Sweden)

    M Ann Mongan

    Full Text Available Genome-wide gene expression profiling has become standard for assessing potential liabilities as well as for elucidating mechanisms of toxicity of drug candidates under development. Analysis of microarray data is often challenging due to the lack of a statistical model that is amenable to biological variation in a small number of samples. Here we present a novel non-parametric algorithm that requires minimal assumptions about the data distribution. Our method for determining differential expression consists of two steps: 1) we apply a nominal threshold on fold change and platform p-value to designate whether a gene is differentially expressed in each treated and control sample relative to the averaged control pool, and 2) we compare the number of samples satisfying the criteria in step 1 between the treated and control groups to estimate the statistical significance based on a null distribution established by sample permutations. The method captures the group effect without being too sensitive to anomalies, as it allows tolerance for potential non-responders in the treatment group and outliers in the control group. Performance and results of this method were compared with the Significance Analysis of Microarrays (SAM) method. These two methods were applied to investigate hepatic transcriptional responses of wild-type (PXR+/+) and pregnane X receptor-knockout (PXR-/-) mice after 96 h exposure to CMP013, an inhibitor of β-secretase (β-site amyloid precursor protein cleaving enzyme 1, or BACE1). Our results showed that CMP013 led to transcriptional changes in hallmark PXR-regulated genes and induced a cascade of gene expression changes that explained the hepatomegaly observed only in PXR+/+ animals. Comparison of concordant expression changes between PXR+/+ and PXR-/- mice also suggested a PXR-independent association between CMP013 and perturbations to cellular stress, lipid metabolism, and biliary transport.
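The two-step procedure can be sketched for a single gene as follows. This is my reconstruction under simplifying assumptions: the platform p-value criterion of step 1 is omitted, and the fold-change threshold, group sizes, and data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def two_step_test(treated, control, fc=1.5, n_perm=2000, rng=rng):
    """Sketch of the two-step non-parametric test for one gene.

    Step 1: flag each sample as differentially expressed if its fold
    change versus the averaged control pool exceeds a nominal threshold.
    Step 2: compare the number of flagged samples between groups and
    estimate significance against a permutation null distribution.
    """
    pooled = control.mean()

    def n_flagged(samples):
        return np.sum(samples / pooled > fc)

    observed = n_flagged(treated) - n_flagged(control)
    combined = np.concatenate([treated, control])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(combined)
        null[i] = n_flagged(perm[: len(treated)]) - n_flagged(perm[len(treated):])
    # One-sided permutation p-value for an excess of responders.
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# A gene induced ~3-fold in 5 of 6 treated samples, with one non-responder
# that the count-based statistic tolerates.
control = rng.normal(100.0, 10.0, 6)
treated = np.concatenate([rng.normal(300.0, 30.0, 5), rng.normal(100.0, 10.0, 1)])
p = two_step_test(treated, control)
print(f"p = {p:.4f}")
```

The count-based statistic is what gives the method its tolerance for non-responders: the one unchanged treated sample lowers the count by one rather than dragging a group mean.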

  3. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion of the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating the statistical distribution from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results of q-deformation distributions in the literature are inaccurate or incomplete.
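For a single state, the Gentile distribution follows from truncating the geometric partition sum at the maximum occupation number. The quick numeric check below is a sketch based on that standard derivation (not code or formulas from the record itself) and confirms that the distribution interpolates between the Fermi–Dirac and Bose–Einstein limits:

```python
import numpy as np

def gentile_occupation(x, n_max):
    """Mean occupation number for Gentile statistics with maximum
    occupation n_max, where x = beta * (epsilon - mu).

    From the truncated partition sum Z = sum_{k=0}^{n_max} e^{-k x}:
    <N> = 1 / (e^x - 1) - (n_max + 1) / (e^{(n_max + 1) x} - 1).
    """
    return 1.0 / np.expm1(x) - (n_max + 1) / np.expm1((n_max + 1) * x)

x = 0.7  # beta * (epsilon - mu), chosen arbitrarily

# n_max = 1 must reduce to Fermi-Dirac; large n_max approaches Bose-Einstein.
fermi = 1.0 / (np.exp(x) + 1.0)
bose = 1.0 / np.expm1(x)
print(gentile_occupation(x, 1), fermi)
print(gentile_occupation(x, 500), bose)
```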

  4. Blind deconvolution of seismograms regularized via minimum support

    International Nuclear Information System (INIS)

    Royer, A A; Bostock, M G; Haber, E

    2012-01-01

    The separation of earthquake source signature and propagation effects (the Earth’s ‘Green’s function’) that encode a seismogram is a challenging problem in seismology. The task of separating these two effects is called blind deconvolution. By considering seismograms of multiple earthquakes from similar locations recorded at a given station, which therefore share the same Green’s function, we may write a linear relation in the time domain u_i(t)*s_j(t) − u_j(t)*s_i(t) = 0, where u_i(t) is the seismogram for the ith source and s_j(t) is the jth unknown source. The symbol * represents the convolution operator. From two or more seismograms, we obtain a homogeneous linear system where the unknowns are the sources. This system is subject to a scaling constraint to deliver a non-trivial solution. Since source durations are not known a priori and must be determined, we augment our system by introducing the source durations as unknowns and we solve the combined system (sources and source durations) using separation of variables. Our solution is derived using direct linear inversion to recover the sources and Newton’s method to recover source durations. This method is tested using two sets of synthetic seismograms created by convolution of (i) random Gaussian source-time functions and (ii) band-limited sources with a simplified Green’s function, and signal-to-noise levels up to 10%, with encouraging results. (paper)
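The cross-convolution relation is easy to verify numerically: two seismograms sharing a Green's function satisfy u_1*s_2 − u_2*s_1 = 0 up to round-off, since both terms equal s_1*s_2*g. The data below are synthetic and illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared Green's function and two distinct source-time functions.
g = rng.normal(size=50)
s1 = rng.normal(size=20)
s2 = rng.normal(size=20)

# Both seismograms share g: u_i = s_i * g.
u1 = np.convolve(s1, g)
u2 = np.convolve(s2, g)

# Cross-convolution identity: u1*s2 - u2*s1 = 0 (up to floating-point
# round-off), because convolution is commutative and associative.
residual = np.convolve(u1, s2) - np.convolve(u2, s1)
print("max |residual| =", np.abs(residual).max())
```

In practice the sources are the unknowns, so each such identity contributes rows to the homogeneous linear system the record describes.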

  5. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    Science.gov (United States)

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make the analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (N Engl J Med 2003; 348:1764-1775) and, alternatively, by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristic (ROC) curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, and straightforward, and overcomes a number of difficulties encountered in the Crespo method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
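A minimal sketch of the KS approach, comparing simulated per-cell T-cell and B-cell intensity distributions with SciPy's ks_2samp; the distributions, parameters, and the interpretation thresholds are invented for illustration and are not the paper's calibrated values:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

# Simulated per-cell ZAP-70 fluorescence intensities (arbitrary units).
# T cells express ZAP-70; in a ZAP-70-positive CLL clone the B cells
# approach the T-cell distribution, while in a negative clone they do not.
t_cells = rng.lognormal(mean=3.0, sigma=0.4, size=1000)
b_cells_positive = rng.lognormal(mean=2.8, sigma=0.4, size=1000)
b_cells_negative = rng.lognormal(mean=1.5, sigma=0.4, size=1000)

# The KS statistic D (maximum distance between the empirical CDFs)
# serves as the expression score: small D = B cells resemble T cells.
d_pos, _ = ks_2samp(t_cells, b_cells_positive)
d_neg, _ = ks_2samp(t_cells, b_cells_negative)
print(f"D (ZAP-70-positive clone) = {d_pos:.3f}")
print(f"D (ZAP-70-negative clone) = {d_neg:.3f}")
```

Because D is computed from whole distributions rather than a single gate, it sidesteps the gating subjectivity of the marker-positivity approach, which is the advantage the record emphasizes.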

  6. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Full Text Available Abstract Background For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as a regression fit to probe sequence models. We demonstrate its efficacy through extensive simulations as well as various biological data.
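In matrix models of splice variants, the identifiability question reduces to the column rank of the variant-to-probe design matrix. A toy example (hypothetical three-variant gene model, not taken from the paper):

```python
import numpy as np

# Design matrix mapping splice variants to probes: entry (p, v) is 1 when
# probe p lies in a region contained in variant v (toy gene model).
#                 v1  v2  v3
A = np.array([[1,  1,  0],   # probe in an exon shared by v1 and v2
              [1,  0,  1],   # probe in an exon shared by v1 and v3
              [1,  1,  1]])  # probe in a constitutive exon

# Full column rank <=> variant abundances are uniquely recoverable from
# noise-free probe intensities.
rank = np.linalg.matrix_rank(A)
print("identifiable:", rank == A.shape[1])

# Dropping the second probe leaves the system rank-deficient: the
# remaining intensities no longer pin down the abundances uniquely.
rank_deficient = np.linalg.matrix_rank(A[[0, 2], :])
print("identifiable:", rank_deficient == A.shape[1])
```

Rank deficiency is exactly the "degeneracy" the record describes; the paper's contribution is characterizing how many additional (possibly uncertain) constraints restore uniqueness.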

  7. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals', and clarify the importance of including both values in a paper.

  8. A deconvolution method for deriving the transit time spectrum for ultrasound propagation through cancellous bone replica models.

    Science.gov (United States)

    Langton, Christian M; Wille, Marie-Luise; Flegg, Mark B

    2014-04-01

    The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportion of bone and marrow propagated. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectrum analyses derived from Bland-Altman analysis ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available utilising ultrasound.
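The idea of recovering a transit time spectrum by deconvolving the output signal against the input can be sketched with synthetic data. SciPy's nnls (itself an active-set algorithm) stands in for the active-set method named in the record, and the decaying-exponential pulse is chosen for numerical simplicity rather than acoustic realism:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

rng = np.random.default_rng(5)

n = 128
# Toy transmitted pulse (a real ultrasound burst would be oscillatory,
# but a decaying exponential keeps the toy problem well conditioned).
pulse = 0.85 ** np.arange(24)

# Transit time spectrum: two clusters of sonic rays, e.g. mostly-marrow
# (fast) and mostly-bone (slow) propagation paths.
spectrum = np.zeros(n)
spectrum[[20, 21]] = [0.6, 0.4]
spectrum[[60, 61]] = [0.7, 0.3]

# Received signal = superposition of delayed, scaled copies of the pulse.
received = np.convolve(pulse, spectrum)[:n] + rng.normal(0.0, 1e-3, n)

# Non-negative deconvolution via an active-set NNLS solver recovers the
# proportion of rays at each transit time.
A = toeplitz(np.concatenate([pulse, np.zeros(n - len(pulse))]), np.zeros(n))
est, _ = nnls(A, received)
print("fast-ray mass:", est[18:24].sum())
print("slow-ray mass:", est[58:64].sum())
```

The non-negativity constraint encodes the physical fact that a proportion of sonic rays cannot be negative, which is what stabilizes the inversion.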

  9. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics that was given by Haldane admits a 'statistical interaction' between distinguishable particles (multispecies statistics). For such statistics, thermodynamic quantities can be evaluated exactly; explicit expressions are presented here for cluster coefficients. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models of the Calogero-Sutherland type. The interesting questions of generalizing this correspondence to the higher-dimensional and the multispecies cases remain essentially open; however, our results provide some hints as to searches for the models in question

  10. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS 2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.

  11. Statistical sum of bosonic string, compactified on an orbifold

    International Nuclear Information System (INIS)

    Morozov, A.; Ol'shanetskij, M.

    1986-01-01

    An expression for the statistical sum of a bosonic string compactified on a singular orbifold is presented. All the information about the orbifold is encoded in the specific combination of theta-functions through which the statistical sum is expressed.

  12. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF data. For deconvolution, a filter is used with a second time-reversed recursive estimation step; here it is necessary to perform about 70 arithmetic operations per RF sample, or about 1 billion operations per second, for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature of the algorithms. The system is interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage ...

  13. The deconvolution of sputter-etching surface concentration measurements to determine impurity depth profiles

    International Nuclear Information System (INIS)

    Carter, G.; Katardjiev, I.V.; Nobes, M.J.

    1989-01-01

    The quasi-linear partial differential continuity equations that describe the evolution of the depth profiles and surface concentrations of marker atoms in kinematically equivalent systems undergoing sputtering, ion collection and atomic mixing are solved using the method of characteristics. It is shown how atomic mixing probabilities can be deduced from measurements of ion collection depth profiles with increasing ion fluence, and how this information can be used to predict surface concentration evolution. Even with this information, however, it is shown that it is not possible to deconvolute directly the surface concentration measurements to provide initial depth profiles, except when only ion collection and sputtering from the surface layer alone occur. It is demonstrated further that optimal recovery of initial concentration depth profiles could be ensured if the concentration-measuring analytical probe preferentially sampled depths near and at the maximum depth of bombardment-induced perturbations. (author)

  14. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
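Specter's exact formulation lives in the paper and repository rather than this abstract; the toy sketch below only shows the general idea of casting a DIA mixture spectrum as a linear combination of library spectra and solving for the coefficients. It uses ordinary least squares with the negative part clipped, a simplification of the constrained solver a real tool would use, and the two-peptide library is invented.

```python
import numpy as np

# Library matrix L: each column is a reference fragment spectrum
# (rows = m/z bins, columns = peptides); values are invented.
L = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 0.4]])

# Observed DIA mixture spectrum: 2 parts peptide 0 plus 1 part peptide 1.
mixture = 2.0 * L[:, 0] + 1.0 * L[:, 1]

# Least-squares deconvolution of the mixture against the library;
# clipping negatives crudely mimics a non-negativity constraint.
coeffs, *_ = np.linalg.lstsq(L, mixture, rcond=None)
coeffs = np.clip(coeffs, 0.0, None)
```

Because the library columns are linearly independent here, the recovered coefficients equal the true mixing proportions; real spectra add noise, interference, and many more library entries.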

  15. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  16. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently-proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect the image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  17. LLL/DOR seismic conservatism of operating plants project. Interim report on Task II.1.3: soil-structure interaction. Deconvolution of the June 7, 1975, Ferndale Earthquake at the Humboldt Bay Power Plant

    International Nuclear Information System (INIS)

    Maslenikov, O.R.; Smith, P.D.

    1978-01-01

    The Ferndale Earthquake of June 7, 1975, provided a unique opportunity to study the accuracy of the seismic soil-structure interaction methods used in the nuclear industry because, other than this event, there have been no significant earthquakes for which moderate motions of nuclear plants have been recorded. Future studies are planned to evaluate the soil-structure interaction methodology further, using increasingly complex methods as required. The first step in this task was to perform deconvolution and soil-structure interaction analyses for the effects of the Ferndale earthquake at the Humboldt Bay Power Plant site. The deconvolution analyses of bedrock motions that were performed are compared, along with additional studies on analytical sensitivity

  18. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
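The core idea of estimating the iterative period from the autocorrelation of the envelope signal can be sketched as follows. This is a hedged illustration, not the authors' implementation: it uses a rectified signal in place of a Hilbert-transform envelope, and a synthetic impulse train with a known period standing in for a real fault signal.

```python
import numpy as np

period = 100                   # true impulse spacing in samples (assumed)
n = 2000
rng = np.random.default_rng(0)
signal = np.zeros(n)
signal[::period] = 1.0                       # periodic fault impulses
signal += 0.05 * rng.standard_normal(n)      # additive noise

env = np.abs(signal)           # crude envelope; a real implementation
env -= env.mean()              # would use the Hilbert transform
ac = np.correlate(env, env, mode="full")[n - 1:]   # one-sided autocorrelation

min_lag = 10                   # skip the dominant zero-lag peak
est_period = min_lag + int(np.argmax(ac[min_lag:n // 2]))
```

The lag of the strongest autocorrelation peak beyond lag zero recovers the impulse spacing, which IMCKD then refines at each iteration instead of demanding a user-supplied prior period.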

  19. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    International Nuclear Information System (INIS)

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix; Voogt, Pim de

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software packages (MsXelerator and Sieve 2.1) used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually

  20. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    Energy Technology Data Exchange (ETDEWEB)

    Bade, Richard [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Causanilles, Ana; Emke, Erik [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Voogt, Pim de, E-mail: w.p.devoogt@uva.nl [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam (Netherlands)

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target non-target screening method is utilised using two databases • Two software packages (MsXelerator and Sieve 2.1) used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually.

  1. A joint Richardson–Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy (MSIM) using a joint Richardson–Lucy deconvolution algorithm, jRL-MSIM, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited to the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data, and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy (ISM), is made. Our algorithm is efficient and freely available in a user-friendly software package. (paper)
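As background, the single-view Richardson–Lucy iteration on which such joint schemes build can be sketched in a few lines. This 1-D version with an invented PSF and test signal is illustrative only; it omits the multifocal image-formation model and the joint (multi-view) update that distinguish jRL-MSIM.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    """Plain 1-D Richardson-Lucy deconvolution: multiplicative updates
    estimate <- estimate * (psf_flipped * (observed / (psf * estimate)));
    no regularization, noiseless demo."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        forward = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(forward, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

psf = np.array([0.25, 0.5, 0.25])            # invented blur kernel
truth = np.zeros(32)
truth[10], truth[20] = 1.0, 2.0              # two point sources
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Because every update is multiplicative, the estimate stays non-negative, one reason RL behaves well on the Poisson-noise-corrupted data mentioned in the abstract.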

  2. Suspected-target pesticide screening using gas chromatography-quadrupole time-of-flight mass spectrometry with high resolution deconvolution and retention index/mass spectrum library.

    Science.gov (United States)

    Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong

    2014-10-01

    A strategy for suspected-target screening of pesticide residues in complicated matrices was exploited using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. The initial detection of components in a matrix was done by high-resolution mass spectrum deconvolution; the preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and the final confirmation was accomplished by accurate mass measurements of representative ions with their response ratios from the MS(1) spectra or representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow to real samples, three matrices (apple, spinach, and scallion), each spiked with 165 test pesticides over a set of concentrations, were selected as models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms based on a narrow mass window (5 mDa), with suspected-target compounds identified by similarity matching of the deconvoluted full mass spectra and filtering of linear RIs. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, product ion spectra were capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library appears to be one of the most efficient tools for the analysis of suspected-target pesticide residues.
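The linear retention indices used for the preliminary-identification filter are conventionally computed with the van den Dool and Kratz formula for temperature-programmed GC; a sketch, with invented n-alkane retention times, might look like:

```python
def linear_retention_index(rt, alkane_rts):
    """Van den Dool & Kratz linear retention index for temperature-programmed
    GC; alkane_rts maps carbon number -> retention time of the n-alkane
    standard, and rt is the analyte's retention time."""
    carbons = sorted(alkane_rts)
    for lo, hi in zip(carbons, carbons[1:]):
        t_lo, t_hi = alkane_rts[lo], alkane_rts[hi]
        if t_lo <= rt <= t_hi:
            return 100.0 * (lo + (rt - t_lo) / (t_hi - t_lo))
    raise ValueError("retention time outside the calibrated alkane range")

alkanes = {10: 5.00, 11: 6.00, 12: 7.20}   # invented retention times (min)
ri = linear_retention_index(5.50, alkanes)
```

An analyte eluting halfway between C10 and C11 thus gets RI 1050, and candidates whose library RI deviates beyond a set tolerance are filtered out before MS/MS confirmation.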

  3. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images.

    Science.gov (United States)

    Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong

    2017-01-01

    Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients formed the test set used to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results for the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed VGG-16 in all segmentation tasks. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for the GTVnd, and 82.6% for CTV, whereas VGG-16 obtained DSC values of 72.3%, 33.7%, and 73.7%, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the incorporation of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a
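For readers unfamiliar with the decoder side, the spatial size produced by a transposed-convolution ("deconvolution") layer follows a standard formula; the sketch below is illustrative only (the stage count, kernel size, and feature-map width are assumptions, not the authors' network) and shows how three stride-2 stages recover an 8x-downsampled resolution.

```python
def deconv_output_size(in_size, kernel, stride=2, padding=0):
    """Spatial size produced by a transposed-convolution layer:
    stride * (in_size - 1) + kernel - 2 * padding (no output padding)."""
    return stride * (in_size - 1) + kernel - 2 * padding

size = 32                          # assumed bottleneck feature-map width
for _ in range(3):                 # three stride-2 upsampling stages
    size = deconv_output_size(size, kernel=2, stride=2)
```

With kernel 2 and stride 2 each stage exactly doubles the resolution (32 -> 64 -> 128 -> 256), which is why encoder and decoder depths are typically mirrored.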

  4. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  5. Measurement and deconvolution of detector response time for short HPM pulses: Part 1, Microwave diodes

    International Nuclear Information System (INIS)

    Bolton, P.R.

    1987-06-01

    A technique is described for measuring and deconvolving response times of microwave diode detection systems in order to generate corrected input signals typical of an infinite detection rate. The method has been applied to cases of 2.86 GHz ultra-short HPM pulse detection where pulse rise time is comparable to that of the detector; whereas, the duration of a few nanoseconds is significantly longer. Results are specified in terms of the enhancement of equivalent deconvolved input voltages for given observed voltages. The convolution integral imposes the constraint of linear detector response to input power levels. This is physically equivalent to the conservation of integrated pulse energy in the deconvolution process. The applicable dynamic range of a microwave diode is therefore limited to a smaller signal region as determined by its calibration

  6. Study of the lifetime of the TL peaks of quartz: comparison of the deconvolution using the first order kinetic with the initial rise method

    International Nuclear Information System (INIS)

    RATOVONJANAHARY, A.J.F.

    2005-01-01

    Quartz is a thermoluminescent material which can be used for dating and/or for dosimetry. This material has been used since the 1960s for dating samples such as pottery, flint, etc., but the method is still subject to improvement. One of the problems of thermoluminescence dating is the estimation of the lifetime of the "used peak". The glow-curve deconvolution (GCD) technique for the analysis of a composite thermoluminescence glow curve into its individual glow peaks has been applied widely since the 1980s, and many functions describing a single glow peak have been proposed. For analysing quartz behaviour, thermoluminescence GCD functions are compared for first-order kinetics. The free parameters of the GCD functions are the maximum peak intensity (I{sub m}) and the maximum peak temperature (T{sub m}), which can be obtained experimentally. The activation energy (E) is the additional free parameter. The lifetime (τ) of each glow peak, which is an important factor for dating, is calculated from these three parameters. For the "used peak" lifetime analysis, GCD results are compared to those from the initial rise method (IRM). Results vary appreciably from method to method. [fr]
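For first-order kinetics, once E and T{sub m} are known from the deconvolution, the lifetime follows from the frequency factor and the storage temperature. A sketch of that arithmetic, with an assumed heating rate and illustrative parameter values rather than the thesis' data:

```python
import math

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def frequency_factor(E, Tm, beta=1.0):
    """First-order frequency factor s (1/s) from the peak-maximum condition
    beta * E / (k * Tm**2) = s * exp(-E / (k * Tm)), heating rate beta (K/s)."""
    return beta * E / (K_B * Tm ** 2) * math.exp(E / (K_B * Tm))

def lifetime(E, Tm, beta=1.0, T_storage=293.0):
    """Trap lifetime tau = exp(E / (k * T_storage)) / s at storage temperature."""
    return math.exp(E / (K_B * T_storage)) / frequency_factor(E, Tm, beta)

tau = lifetime(E=1.0, Tm=500.0)   # illustrative: E = 1 eV, Tm = 500 K
```

With these illustrative numbers the lifetime at room temperature comes out on the order of 10^8 s (a few years), and it grows steeply with activation energy, which is why small disagreements between GCD and IRM estimates of E matter for dating.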

  7. Deconvolution analysis of 24-h serum cortisol profiles informs the amount and distribution of hydrocortisone replacement therapy.

    Science.gov (United States)

    Peters, Catherine J; Hill, Nathan; Dattani, Mehul T; Charmandari, Evangelia; Matthews, David R; Hindmarsh, Peter C

    2013-03-01

    Hydrocortisone therapy is based on a dosing regimen derived from estimates of cortisol secretion, but little is known about how the dose should be distributed throughout the 24 h. We have used deconvolution analysis of 24-h serum cortisol profiles to determine 24-h cortisol secretion and its distribution, to inform hydrocortisone dosing schedules in young children and older adults. Twenty-four-hour serum cortisol profiles from 80 adults (41 men, aged 60-74 years) and 29 children (24 boys, aged 5-9 years) were subjected to deconvolution analysis using an 80-min half-life to ascertain total cortisol secretion and its distribution throughout the 24-h period. Mean daily cortisol secretion was similar between adults (6.3 mg/m(2) body surface area/day, range 5.1-9.3) and children (8.0 mg/m(2) body surface area/day, range 5.3-12.0). Peak serum cortisol concentration was higher in children compared with adults, whereas nadir serum cortisol concentrations were similar. The timing of the peak serum cortisol concentration was similar (07.05-07.25), whereas that of the nadir concentration occurred later in adults (midnight) than in children (22.48) (P = 0.003). Children had the highest percentage of cortisol secretion between 06.00 and 12.00 (38.4%), whereas in adults this took place between midnight and 06.00 (45.2%). These observations suggest that the daily hydrocortisone replacement dose should be equivalent on average to 6.3 mg/m(2) body surface area/day in adults and 8.0 mg/m(2) body surface area/day in children. Differences in the distribution of the total daily dose between older adults and young children need to be taken into account when using a three or four times per day dosing regimen. © 2012 Blackwell Publishing Ltd.
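The deconvolution step can be illustrated with a discrete single-compartment model: measured concentration is the convolution of the secretion rate with an exponential elimination kernel built from the 80-min half-life, so secretion is recovered by inverting the lower-triangular convolution matrix. The sampling grid and pulse times below are invented; real profiles are noisy and require regularized or parametric deconvolution rather than this exact inverse.

```python
import numpy as np

HALF_LIFE = 80.0                       # cortisol half-life from the study (min)
lam = np.log(2.0) / HALF_LIFE
dt, n = 10.0, 24                       # hypothetical 10-min samples over 4 h
t = np.arange(n) * dt

kernel = np.exp(-lam * t) * dt         # exponential elimination kernel
A = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])      # lower-triangular convolution matrix

secretion = np.zeros(n)
secretion[3], secretion[15] = 1.0, 0.5          # two secretion pulses
concentration = A @ secretion                   # forward model
recovered = np.linalg.solve(A, concentration)   # noiseless deconvolution
```

In the noiseless case the inversion returns the pulse train exactly; with measurement noise the same matrix inverse amplifies error, which is what motivates the constrained deconvolution methods used in practice.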

  8. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area has undergone preliminary studies in geology, geochemistry, and geophysics. As part of the geophysical study, gravity measurements were carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The results of the analysis will be useful for constructing a more realistic conceptual model of the Endut geothermal area.
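The Horizontal Gradient analysis mentioned above amounts to computing the magnitude of the lateral gradient of the gridded Bouguer anomaly, whose maxima trace density-contrast boundaries such as faults. A minimal sketch on a synthetic grid (unit station spacing assumed, values invented):

```python
import numpy as np

def horizontal_gradient(bouguer, dx=1.0, dy=1.0):
    """Horizontal gradient magnitude sqrt((dg/dx)**2 + (dg/dy)**2) of a
    gridded Bouguer anomaly; its maxima outline lateral density contrasts."""
    dgdy, dgdx = np.gradient(bouguer, dy, dx)
    return np.hypot(dgdx, dgdy)

grid = np.zeros((5, 8))
grid[:, 4:] = 10.0            # a 10 mGal step, e.g. across a buried fault
hg = horizontal_gradient(grid)
```

The gradient magnitude peaks along the step and vanishes over the flat flanks, so the ridge of the HG map marks the fault trace; Euler deconvolution then uses the same derivatives to estimate source depth.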

  9. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation is of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility, reduction of the data computation and of the dependency on 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system exceeds 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
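The segmented look-up-table acceleration of the power operation can be illustrated with a piecewise-linear table. The segment count, exponent, and range below are arbitrary choices for the sketch, and a DSP implementation would use fixed-point arithmetic rather than this floating-point version.

```python
import math

def build_pow_lut(exponent, lo, hi, segments):
    """Sampled values of x**exponent at equally spaced knots on [lo, hi]."""
    step = (hi - lo) / segments
    return lo, step, [(lo + i * step) ** exponent for i in range(segments + 1)]

def lut_pow(x, lut):
    """Piecewise-linear interpolation into the table: one multiply-add
    replaces a full pow() evaluation."""
    lo, step, values = lut
    i = min(int((x - lo) / step), len(values) - 2)
    frac = (x - (lo + i * step)) / step
    return values[i] + frac * (values[i + 1] - values[i])

lut = build_pow_lut(0.5, 0.0, 4.0, 4096)   # table for x**0.5 on [0, 4]
```

The table is built once, after which each power evaluation costs an index computation and one interpolation, trading memory for the transcendental-function time that dominates the original inner loop.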

  10. Quantum statistical theory of solid plasma (Com.1)

    International Nuclear Information System (INIS)

    Kim Yon Il

    1986-01-01

    In order to obtain the Hamiltonian of the electron system in solid plasma, the self-consistent electromagnetic field formed by the electron system is quantized. In this process the longitudinal vector potential is introduced through the relation. The obtained Hamiltonian is expressed in the collective coordinates, consistent with D. Pines' result. Various quantum statistical expressions, the dispersion relation and the sum rules of the transverse dielectric function are derived using the fact that the collective coordinates are connected with the electromagnetic field in the method of this paper. In addition, various quantum statistical expressions for the longitudinal dielectric function convenient for practical calculations are obtained besides the Nozieres-Pines expression. (author)

  11. Thermoluminescence of nanocrystalline CaSO{sub 4}: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    Energy Technology Data Exchange (ETDEWEB)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com [Department of Physics, University of Pune, Ganeshkhind, Pune -411007, India and Department of Physics, Fergusson College, Pune- 411004 (India); Patil, B. J.; Bhoraskar, V. N.; Dhole, S. D. [Department of Physics, University of Pune, Ganeshkhind, Pune -411007 (India); Sahare, P. D. [Department of Physics and Astrophysics, University of Delhi, Delhi- 110007 (India)

    2014-04-24

    Nanorods of CaSO{sub 4}: Dy having a diameter of 20 nm and a length of 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates as the dose increases further. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves, and the trapping parameters for the various peaks have been calculated with it.

  12. Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Directory of Open Access Journals (Sweden)

    Turk Rolf

    2006-04-01

    Background: The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results: We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan- and gamma-sarcoglycan-deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared with an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles of selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion: The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it
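The test's core computation, fitting a polynomial per replicate and condition and then comparing the coefficient vectors with a two-sample Hotelling T2 statistic, can be sketched as follows. The profiles are simulated, and the degrees-of-freedom correction and F-transformation needed for an actual p-value are omitted.

```python
import numpy as np

def polyfit_coeffs(profiles, times, degree=2):
    """One polynomial-coefficient vector per replicate temporal profile."""
    return np.array([np.polyfit(times, y, degree) for y in profiles])

def hotelling_t2(X, Y):
    """Two-sample Hotelling T^2 statistic on coefficient vectors."""
    n1, n2 = len(X), len(Y)
    d = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False)
         + (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    return n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)

rng = np.random.default_rng(1)
times = np.linspace(0.0, 1.0, 8)                  # 8 time points per profile
flat = np.array([rng.normal(0.0, 0.1, 8) for _ in range(6)])
rising = np.array([2.0 * times + rng.normal(0.0, 0.1, 8) for _ in range(6)])

t2_diff = hotelling_t2(polyfit_coeffs(flat, times),
                       polyfit_coeffs(rising, times))
t2_same = hotelling_t2(polyfit_coeffs(flat, times),
                       polyfit_coeffs(flat + rng.normal(0.0, 0.01, (6, 8)),
                                      times))
```

A gene whose temporal trend differs between conditions (flat vs. rising) yields a far larger T2 than one whose profiles agree, which is exactly the signal the temporal test ranks genes by.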

  13. Cusp and W peak analysis in electron capture to the continuum of bare H and He projectiles from hydrocarbon and fluorocarbon gases

    Energy Technology Data Exchange (ETDEWEB)

    Joyce, J.M.; Bissinger, G.

    1987-04-01

    The ECC cusp and W peak shapes for continuum electron capture by approx. MeV/u H⁺ and He²⁺ from hydrocarbon and fluorocarbon gas molecules are analyzed with the general parametric expression of Meckbach, Nemirovsky and Garibotti (i) to look for trends in the coefficients of these parameters, (ii) as a way of generating computed cusp shapes to reduce statistical fluctuations in cusp difference spectra, and (iii) to provide information on the deconvoluted d²σ/dν dθ values for cusp and W peaks in the hydrocarbon gases.

  14. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    International Nuclear Information System (INIS)

    Jimenez-Ruiz, A.; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R.

    2017-01-01

Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs), causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained through both means are in good agreement with each other.

  15. INTRAVAL project phase 2. Analysis of STRIPA 3D data by a deconvolution technique

    International Nuclear Information System (INIS)

    Ilvonen, M.; Hautojaervi, A.; Paatero, P.

    1994-09-01

The data analysed in this report were obtained in tracer experiments performed from a specially excavated drift in good granite rock at the level of 360 m below the ground in the Stripa mine. Tracer transport paths from the injection points to the collecting sheets at the tunnel walls were tens of meters long. Data for six tracers that arrived in measurable concentrations were elaborated by different means of data analysis to reveal the transport behaviour of solutes in the rock fractures. Techniques like direct inversion of the data, Fourier analysis, Singular Value Decomposition (SVD) and non-negative least squares fitting (NNLS) were employed. A newly developed code based on a general-purpose approach for solving deconvolution-type or integral equation problems, Extreme Value Estimation (EVE), proved to be a very helpful tool in deconvolving impulse responses from the injection flow rates and breakthrough curves of tracers and in assessing the physical confidence of the results. (23 refs., 33 figs.)
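
Of the techniques listed, non-negative least squares deconvolution is the most straightforward to sketch. The Python fragment below is illustrative only (not the EVE code; the signal shapes in the test are invented): the impulse response is recovered from an injection flow rate and a breakthrough curve by solving the discrete convolution equation subject to non-negativity.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_nnls(injection, breakthrough):
    # Build the lower-triangular convolution (Toeplitz) matrix of the
    # injection signal and solve A h ~= breakthrough subject to h >= 0.
    n = len(breakthrough)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = injection[i::-1]
    h, _residual = nnls(A, breakthrough)
    return h
```

The non-negativity constraint is what keeps the recovered impulse response physically meaningful (a residence-time distribution cannot be negative), which unconstrained inversion does not guarantee.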

  16. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez-Ruiz, A., E-mail: ailjimrui@alum.us.es; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R., E-mail: pradogotor@us.es [University of Seville, The Department of Physical Chemistry (Spain)

    2017-01-15

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs) causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The possibilities of the CIELab chromaticity system parameters’ evolution have also been explored in order to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying 17 nm citrate-capped AuNP color changes, which are strongly dependant on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained through the use of both means are in good agreement with each other.

  17. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Födisch, P., E-mail: p.foedisch@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Wohsmann, J. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Lange, B. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Schönherr, J. [Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Enghardt, W. [OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, PF 41, 01307 Dresden (Germany); Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology, Bautzner Landstr. 400, 01328 Dresden (Germany); German Cancer Consortium (DKTK) and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Kaever, P. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany)

    2016-09-11

In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. Its output signal is shaped by a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem or from an increased rate of pulse pile-ups. Moreover, spectroscopy applications require a correction of the pulse height, while a shortened pulse width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finally, the exponential decay is shaped to a step-like output signal that can be exploited by forward-looking pulse processing.
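
The core of such an exponential-decay deconvolution can be illustrated with a first-order IIR filter (a generic textbook sketch, not the filter designed in the paper): the difference equation y[n] = y[n-1] + x[n] - d*x[n-1], with d = exp(-dt/tau), cancels the decay pole of the amplifier and replaces it with an ideal integrator, turning an exponential pulse into a step.

```python
import math

def deconvolve_exp(x, tau, dt):
    # First-order IIR deconvolver for a charge-sensitive amplifier's
    # exponential decay: y[n] = y[n-1] + x[n] - d*x[n-1], d = exp(-dt/tau).
    # An exponential input pulse A*d**n becomes a step of height A.
    d = math.exp(-dt / tau)
    y = [0.0] * len(x)
    prev_x = 0.0
    prev_y = 0.0
    for n, xn in enumerate(x):
        y[n] = prev_y + xn - d * prev_x
        prev_x, prev_y = xn, y[n]
    return y
```

The step height directly encodes the deposited charge (useful for spectroscopy), and the step can subsequently be differentiated or trapezoid-shaped for high-throughput pile-up handling.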

  18. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, Miori, E-mail: miori@mx6.et.tiki.ne.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Tsuji, Yoshihisa, E-mail: y.tsuji@extra.ocn.ne.jp [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Iwasaki, Toshiroh [Department of Veterinary Internal Medicine, Tokyo University of Agriculture and Technology, Saiwai-cho, 3-5-8, Fuchu 183-8509 (Japan); Miyake, Yoh-Ichi [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan); Yazumi, Shujiro [Digestive Disease Center, Kitano Hospital, 2-4-20 Ougi-machi, Kita-ku, Osaka 530-8480 (Japan); Chiba, Tsutomu [Department of Gastroenterology and Hepatology, Kyoto University Graduate School of Medicine, Shogoinkawara-cho 54, Sakyo-ku 606-8507 (Japan); Yamada, Kazutaka, E-mail: kyamada@obihiro.ac.jp [Department of Clinical Veterinary Science, Obihiro University of Agriculture and Veterinary Medicine, Nishi 2-11 Inada-cho, Obihiro 080-8555 (Japan)

    2011-01-15

Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the measured parameters of these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI kg⁻¹) at 5.0 ml s⁻¹. The abdominal aorta and splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The artery AUC significantly decreased as it neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.
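
The maximum slope method itself reduces to a one-line computation. The sketch below is a generic illustration (the function name, inputs, and the synthetic curves in the test are assumptions, not the authors' software): perfusion is estimated as the peak slope of the tissue enhancement curve divided by the peak arterial enhancement, under the assumption of no venous outflow before the tissue peak.

```python
import numpy as np

def tbf_max_slope(t, tissue_hu, artery_hu):
    # Maximum slope estimate of tissue blood flow:
    # TBF = max d(tissue)/dt / max(arterial enhancement).
    # Valid only while no contrast has yet left through the vein,
    # which is why the abstract compares tissue peak time with the
    # venous appearance time.
    slope = np.max(np.gradient(np.asarray(tissue_hu, float), t))
    return slope / np.max(artery_hu)
```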

  19. Measurement of canine pancreatic perfusion using dynamic computed tomography: Influence of input-output vessels on deconvolution and maximum slope methods

    International Nuclear Information System (INIS)

    Kishimoto, Miori; Tsuji, Yoshihisa; Katabami, Nana; Shimizu, Junichiro; Lee, Ki-Ja; Iwasaki, Toshiroh; Miyake, Yoh-Ichi; Yazumi, Shujiro; Chiba, Tsutomu; Yamada, Kazutaka

    2011-01-01

Objective: We investigated whether the prerequisites of the maximum slope and deconvolution methods are satisfied in pancreatic perfusion CT and whether the measured parameters of these algorithms are correlated. Methods: We examined nine beagles injected with iohexol (200 mgI kg⁻¹) at 5.0 ml s⁻¹. The abdominal aorta and splenic and celiac arteries were selected as the input arteries, and the splenic vein as the output vein. For the maximum slope method, we determined the arterial contrast volume of each artery by measuring the area under the curve (AUC) and compared the peak enhancement time in the pancreas with the contrast appearance time in the splenic vein. For the deconvolution method, the artery-to-vein collection rate of contrast medium was calculated. We calculated the pancreatic tissue blood flow (TBF), tissue blood volume (TBV), and mean transit time (MTT) using both algorithms and investigated their correlation based on vessel selection. Results: The artery AUC significantly decreased as it neared the pancreas (P < 0.01). In all cases, the peak time of the pancreas (11.5 ± 1.6) was shorter than the appearance time (14.1 ± 1.6) in the splenic vein. The splenic artery-vein combination exhibited the highest collection rate (91.1%) and was the only combination for which TBF, TBV, and MTT were significantly correlated between the two algorithms. Conclusion: Selection of a vessel nearest to the pancreas is considered a more appropriate prerequisite. Therefore, vessel selection is important when comparing the semi-quantitative parameters obtained by different algorithms.

  20. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Kuo Men

    2017-12-01

Full Text Available Background Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image, and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients served as a test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results The proposed DDNN method outperformed VGG-16 in all segmentations. The mean DSC values of DDNN were 80.9% for the GTVnx, 62.3% for the GTVnd, and 82.6% for the CTV, whereas VGG-16 obtained DSC values of 72.3, 33.7, and 73.7%, respectively. Conclusion DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the combination of MR images. 
In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy
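
The Dice similarity coefficient used to score the segmentations is simple to compute. A minimal sketch (illustrative; the convention of returning 1.0 for two empty masks is a choice made here, not stated in the abstract):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # DSC = 2 |A intersect B| / (|A| + |B|).
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```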

  1. Application of stable isotopes and isotope pattern deconvolution-ICPMS to speciation of endogenous and exogenous Fe and Se in rats

    International Nuclear Information System (INIS)

    Gonzalez Iglesias, H.; Fernandez-Sanchez, M.L.; Garcia Alonso, J.I.; Lopez Sastre, J.B.; Sanz-Medel, A.

    2009-01-01

    Full text: Enriched stable isotopes are crucial to study essential trace element metabolism (e.g. Se, Fe) in biological systems. Measuring isotope ratios by ICPMS and using appropriate mathematical calculations, based on isotope pattern deconvolution (IPD) may provide quantitative data about endogenous and exogenous essential or toxic elements and their metabolism. In this work, IPD was applied to explore the feasibility of using two Se (or Fe) enriched stable isotopes, one as metabolic tracer and the other as quantitation tracer, to discriminate between the endogenous and supplemented Se (or Fe) species in rat fluids by collision cell ICPMS coupled to HPLC separation. (author)
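
The arithmetic behind isotope pattern deconvolution can be sketched as a small least-squares problem (illustrative only; the reference patterns in the test are invented numbers, not real Se or Fe isotope abundances): the measured isotope pattern is expressed as a linear combination of the natural pattern, the metabolic-tracer pattern, and the quantitation-tracer pattern, and the molar contributions are obtained by regression.

```python
import numpy as np

def ipd(observed, patterns):
    # Isotope pattern deconvolution: solve observed ~= patterns @ x by
    # least squares, where each column of `patterns` is one reference
    # isotope pattern (natural, metabolic tracer, quantitation tracer)
    # and x gives the molar contribution of each source.
    x, *_ = np.linalg.lstsq(patterns, observed, rcond=None)
    return x
```

Because the enriched tracers have isotope signatures distinct from the natural pattern, the fitted contributions separate endogenous from supplemented element in each chromatographic fraction.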

  2. A course in statistics with R

    CERN Document Server

    Tattar, Prabhanjan N; Manjunath, B G

    2016-01-01

Integrates the theory and applications of statistics using R. A Course in Statistics with R has been written to bridge the gap between theory and applications and to explain how mathematical expressions are converted into R programs. The book has been designed primarily as a useful companion for a Masters student during each semester of the course, but it will also help applied statisticians in revisiting the underpinnings of the subject. With this dual goal in mind, the book begins with R basics and quickly covers visualization and exploratory analysis. Probability and statistical inference, inclusive of classical, nonparametric, and Bayesian schools, is developed with definitions, motivations, mathematical expressions and R programs in a way which will help the reader to understand the mathematical development as well as the R implementation. Linear regression models, experimental designs, multivariate analysis, and categorical data analysis are treated in a way which makes effective use of visualization techniques and...

  3. A novel deconvolution method for modeling UDP-N-acetyl-D-glucosamine biosynthetic pathways based on 13C mass isotopologue profiles under non-steady-state conditions

    Directory of Open Access Journals (Sweden)

    Belshoff Alex C

    2011-05-01

    against experimental data. The reproducibility and robustness of the deconvolution were verified by replicate experiments, extensive statistical analyses, and cross-validation against NMR data. Conclusions This computational approach revealed the relative fluxes through the different biosynthetic pathways of UDP-GlcNAc, which comprises simultaneous sequential and parallel reactions, providing new insight into the regulation of UDP-GlcNAc levels and O-linked protein glycosylation. This is the first such analysis of UDP-GlcNAc dynamics, and the approach is generally applicable to other complex metabolites comprising distinct metabolic subunits, where sufficient numbers of isotopologues can be unambiguously resolved and accurately measured.

  4. Self-Organization of Genome Expression from Embryo to Terminal Cell Fate: Single-Cell Statistical Mechanics of Biological Regulation

    Directory of Open Access Journals (Sweden)

    Alessandro Giuliani

    2017-12-01

Full Text Available A statistical mechanical mean-field approach to the temporal development of biological regulation provides a phenomenological, but basic, description of the dynamical behavior of genome expression in terms of autonomous self-organization with a critical transition (Self-Organized Criticality: SOC). This approach reveals the basis of self-regulation/organization of genome expression, where the extreme complexity of living matter precludes any strict mechanistic approach. The self-organization in SOC involves two critical behaviors: scaling-divergent behavior (genome avalanche) and sandpile-type critical behavior. Genome avalanche patterns (competition between order, i.e., scaling, and disorder, i.e., divergence) reflect the opposite sequence of events characterizing the self-organization process in embryo development and helper T17 (Th17) terminal cell differentiation, respectively. On the other hand, the temporal development of sandpile-type criticality (the degree of SOC control) in the mouse embryo suggests the existence of an SOC control landscape with a critical transition state (i.e., the erasure of zygote-state criticality). This indicates that a phase transition of the mouse genome before and after reprogramming (immediately after the late 2-cell state) occurs through a dynamical change in a control parameter. This result provides a quantitative open-thermodynamic appreciation of the still largely qualitative notion of the epigenetic landscape. Our results suggest: (i) the existence of coherent waves of condensation/de-condensation in chromatin, which are transmitted across regions of different gene-expression levels along the genome; and (ii) that essentially the same critical dynamics we observed for cell-differentiation processes exist in overall RNA expression during embryo development, which is particularly relevant because it gives further proof of SOC control of overall expression as a universal feature.

  5. ArraySolver: An Algorithm for Colour-Coded Graphical Display and Wilcoxon Signed-Rank Statistics for Comparing Microarray Gene Expression Data

    OpenAIRE

    Khan, Haseeb Ahmad

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for tra...
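
A two-group comparison of the kind ArraySolver targets can be run with the Wilcoxon signed-rank test from scipy (a generic sketch, not the ArraySolver algorithm; the helper name and the 0.05 default are assumptions):

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_arrays(expr_a, expr_b, alpha=0.05):
    # Paired two-group comparison of one gene's replicated expression
    # values with the Wilcoxon signed-rank test; returns the test
    # statistic, the p-value, and a significance flag at level alpha.
    stat, p = wilcoxon(np.asarray(expr_a, float), np.asarray(expr_b, float))
    return stat, p, p < alpha
```

Being rank-based, the test needs no normality assumption, which suits the skewed intensity distributions typical of microarray data.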

  6. Fermi-Dirac statistics and the number theory

    OpenAIRE

    Kubasiak, A.; Korbicz, J.; Zakrzewski, J.; Lewenstein, M.

    2005-01-01

    We relate the Fermi-Dirac statistics of an ideal Fermi gas in a harmonic trap to partitions of given integers into distinct parts, studied in number theory. Using methods of quantum statistical physics we derive analytic expressions for cumulants of the probability distribution of the number of different partitions.
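
The number-theoretic quantity involved, the number of partitions of an integer into distinct parts, can be computed with a short generating-function recurrence (a standard textbook sketch, separate from the authors' analytic cumulant expressions):

```python
def distinct_partitions(n):
    # Number of partitions of n into distinct parts, i.e. the
    # coefficient of q**n in prod_{k>=1} (1 + q**k). Each part k is
    # used at most once, hence the descending inner loop (0/1 knapsack).
    counts = [1] + [0] * n
    for k in range(1, n + 1):
        for m in range(n, k - 1, -1):
            counts[m] += counts[m - k]
    return counts[n]
```

In the fermionic picture, each such partition corresponds to one way of distributing excitation energy among fermions obeying the Pauli exclusion principle in a harmonic trap.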

  7. A simple method for the deconvolution of ¹³⁴Cs/¹³⁷Cs peaks in gamma-ray scintillation spectrometry

    International Nuclear Information System (INIS)

    Darko, E.O.; Osae, E.K.; Schandorf, C.

    1998-01-01

A simple method for the deconvolution of ¹³⁴Cs/¹³⁷Cs peaks in a given mixture of ¹³⁴Cs and ¹³⁷Cs using NaI(Tl) gamma-ray scintillation spectrometry is described. In this method the 795 keV peak of ¹³⁴Cs is used as a reference to calculate the activity of ¹³⁷Cs directly from the measured peaks. Certified reference materials were measured using the method and compared with high-resolution gamma-ray spectrometry measurements. The results showed good agreement with the certified values. The method is very simple and does not need any complicated mathematics or computer programme to deconvolute the overlapping 604.7 keV and 661.6 keV peaks of ¹³⁴Cs and ¹³⁷Cs, respectively. (author). 14 refs.; 1 tab., 2 figs
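
The reference-peak correction described above amounts to simple arithmetic. A hedged sketch (the function and the calibration ratio k are illustrative; in practice k would be determined with a pure ¹³⁴Cs source):

```python
def cs137_counts(counts_662_region, counts_795, k):
    # In NaI(Tl) spectra the 604.7 keV (134Cs) and 661.6 keV (137Cs)
    # peaks overlap. The interference-free 795 keV peak of 134Cs serves
    # as a reference: k is the calibration ratio of 134Cs counts
    # spilling into the 662 keV region to its 795 keV peak counts, so
    # the net 137Cs contribution is the measured region minus k times
    # the reference peak.
    return counts_662_region - k * counts_795
```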

  8. Tracking juniper berry content in oils and distillates by spectral deconvolution of gas chromatography/mass spectrometry data.

    Science.gov (United States)

    Robbat, Albert; Kowalsick, Amanda; Howell, Jessalin

    2011-08-12

    The complex nature of botanicals and essential oils makes it difficult to identify all of the constituents by gas chromatography/mass spectrometry (GC/MS) alone. In this paper, automated sequential, multidimensional gas chromatography/mass spectrometry (GC-GC/MS) was used to obtain a matrix-specific, retention time/mass spectrometry library of 190 juniper berry oil compounds. GC/MS analysis on stationary phases with different polarities confirmed the identities of each compound when spectral deconvolution software was used to analyze the oil. Also analyzed were distillates of juniper berry and its oil as well as gin from four different manufacturers. Findings showed the chemical content of juniper berry can be traced from starting material to final product and can be used to authenticate and differentiate brands. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Coherent states for oscillators of non-conventional statistics

    International Nuclear Information System (INIS)

    Dao Vong Duc; Nguyen Ba An

    1998-12-01

    In this work we consider systematically the concept of coherent states for oscillators of non-conventional statistics - parabose oscillator, infinite statistics oscillator and generalised q-deformed oscillator. The expressions for the quadrature variances and particle number distribution are derived and displayed graphically. The obtained results show drastic changes when going from one statistics to another. (author)

  10. Analysis of Photosystem I Donor and Acceptor Sides with a New Type of Online-Deconvoluting Kinetic LED-Array Spectrophotometer.

    Science.gov (United States)

    Schreiber, Ulrich; Klughammer, Christof

    2016-07-01

The newly developed Dual/KLAS-NIR spectrophotometer, technical details of which were reported very recently, is used in measuring redox changes of P700, plastocyanin (PC) and ferredoxin (Fd) in intact leaves of Hedera helix, Taxus baccata and Brassica napus. An overview of various light-/dark-induced changes of deconvoluted P700⁺, PC⁺ and Fd⁻ signals is presented, demonstrating the wealth of novel information and the consistency of the obtained results. Fd⁻ changes are particularly large after dark adaptation. PC oxidation precedes P700 oxidation during dark-light induction and in steady-state light response curves. Fd reoxidation during induction correlates with the secondary decline of simultaneously measured fluorescence yield, both of which are eliminated by removal of O₂. By determination of 100% redox changes, relative contents of PC/P700 and Fd/P700 can be assessed, which show considerable variations between different leaves, with a trend to higher values in sun leaves. Based on deconvoluted P700⁺ signals, the complementary quantum yields of PSI, Y(I) (photochemical energy use), Y(ND) (non-photochemical loss due to oxidized primary donor) and Y(NA) (non-photochemical loss due to reduced acceptor), are determined as a function of light intensity and compared with the corresponding complementary quantum yields of PSII, Y(II) (photochemical energy use), Y(NPQ) (regulated non-photochemical loss) and Y(NO) (non-regulated non-photochemical loss). The ratio Y(I)/Y(II) increases with increasing intensities. In the low intensity range, a two-step increase of PC⁺ is indicative of heterogeneous PC pools. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.
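
The complementary PSI quantum yields mentioned above are commonly computed from saturation-pulse P700⁺ levels. A minimal sketch under the standard definitions (P = current P700⁺ signal, Pm = maximal signal after full oxidation, Pm' = maximal signal during actinic illumination); treat this as a generic formulation rather than the instrument's exact procedure:

```python
def psi_yields(P, Pm, Pm_prime):
    # Complementary PSI quantum yields from deconvoluted P700+ signals:
    #   Y(ND): donor-side limitation (P700 already oxidized),
    #   Y(NA): acceptor-side limitation (P700 not oxidizable by a pulse),
    #   Y(I):  photochemical energy use (the remainder).
    # By construction Y(I) + Y(ND) + Y(NA) = 1.
    y_nd = P / Pm
    y_na = (Pm - Pm_prime) / Pm
    y_i = (Pm_prime - P) / Pm
    return y_i, y_nd, y_na
```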

  11. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality and thereby improve the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms current state-of-the-art blind deconvolution methods.
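
The Poisson maximum-likelihood deconvolution at the heart of such algorithms is the Richardson-Lucy iteration. A 1-D, single-frame, known-PSF sketch follows (illustrative; the paper's multi-frame, regularized, blind variant is considerably more elaborate):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    # Richardson-Lucy iteration: the fixed-point scheme that maximizes
    # the Poisson log-likelihood of `observed` given a blur by `psf`.
    # est <- est * (psf_flipped * (observed / (psf * est)))
    psf = np.asarray(psf, float)
    psf = psf / psf.sum()          # normalize to conserve flux
    psf_flip = psf[::-1]           # correlation kernel
    est = np.full_like(np.asarray(observed, float), float(np.mean(observed)))
    for _ in range(iterations):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

The multiplicative update keeps the estimate non-negative and approximately conserves total flux, two properties that make it the standard choice for photon-limited AO imagery.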

  12. Statistical modelling of transcript profiles of differentially regulated genes

    Directory of Open Access Journals (Sweden)

    Sergeant Martin J

    2008-07-01

Full Text Available Abstract Background The vast quantities of gene expression profiling data produced in microarray studies, and the more precise quantitative PCR, are often not statistically analysed to their full potential. Previous studies have summarised gene expression profiles using simple descriptive statistics, basic analysis of variance (ANOVA) and the clustering of genes based on simple models fitted to their expression profiles over time. We report the novel application of statistical non-linear regression modelling techniques to describe the shapes of expression profiles for the fungus Agaricus bisporus, quantified by PCR, and for E. coli and Rattus norvegicus, using microarray technology. The use of parametric non-linear regression models provides a more precise description of expression profiles, reducing the "noise" of the raw data to produce a clear "signal" given by the fitted curve, and describing each profile with a small number of biologically interpretable parameters. This approach then allows the direct comparison and clustering of the shapes of response patterns between genes and potentially enables a greater exploration and interpretation of the biological processes driving gene expression. Results Quantitative reverse transcriptase PCR-derived time-course data of genes were modelled. "Split-line" or "broken-stick" regression identified the initial time of gene up-regulation, enabling the classification of genes into those with primary and secondary responses. Five-day profiles were modelled using the biologically-oriented critical exponential curve, y(t) = A + (B + Ct)R^t + ε. This non-linear regression approach allowed the expression patterns for different genes to be compared in terms of curve shape, time of maximal transcript level and the decline and asymptotic response levels. Three distinct regulatory patterns were identified for the five genes studied. Applying the regression modelling approach to microarray-derived time course data
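
A fit of the critical exponential curve can be reproduced with scipy's curve_fit (a sketch on synthetic noise-free data; the time grid, parameter values, starting guess and bounds are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_exponential(t, A, B, C, R):
    # The critical exponential curve from the abstract:
    # y(t) = A + (B + C*t) * R**t, with A the asymptote and 0 < R < 1
    # controlling the decline toward it.
    return A + (B + C * t) * R**t

# Synthetic five-day profile with known parameters (illustrative).
t = np.arange(0.0, 5.25, 0.25)
y = critical_exponential(t, 1.0, 2.0, 1.5, 0.6)

# Bounding R inside (0, 1) keeps R**t well defined during optimization.
params, _cov = curve_fit(critical_exponential, t, y,
                         p0=[0.5, 1.5, 1.0, 0.5],
                         bounds=([-5, -5, -5, 0.01], [5, 5, 5, 0.99]))
```

The fitted A, B, C and R are the "small number of biologically interpretable parameters" the abstract refers to: asymptotic level, initial level, rate of change and decay rate.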

  13. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  14. Deconvolution of the tree ring based δ¹³C record

    International Nuclear Information System (INIS)

    Peng, T.; Broecker, W.S.; Freyer, H.D.; Trumbore, S.

    1983-01-01

We assumed the tree-ring based ¹³C/¹²C record constructed by Freyer and Belacy (1983) to be representative of the fossil fuel and forest-soil induced ¹³C/¹²C change for atmospheric CO₂. Through the use of a modification of the Oeschger et al. ocean model, we have computed the contribution of the combustion of coal, oil, and natural gas to this observed ¹³C/¹²C change. A large residual remains when the tree-ring based record is corrected for the contribution of fossil fuel CO₂. A deconvolution was performed on this residual to determine the time history and magnitude of the forest-soil reservoir changes over the past 150 years. Several important conclusions were reached. (1) The magnitude of the integrated CO₂ input from these sources was about 1.6 times that from fossil fuels. (2) The forest-soil contribution reached a broad maximum centered at about 1900. (3) Over the two-decade period covered by the Mauna Loa atmospheric CO₂ content record, the input from forests and soils was about 30% of that from fossil fuels. (4) The ¹³C/¹²C trend over the last 20 years was dominated by the input of fossil fuel CO₂. (5) The forest-soil release did not contribute significantly to the secular increase in atmospheric CO₂ observed over the last 20 years. (6) The pre-1850 atmospheric pCO₂ values must have been in the range 245 to 270 × 10⁻⁶ atmospheres

  15. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    Science.gov (United States)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori-known information about the formation of the electron bunch. Application of the method is illustrated with a practically important example of a bunch formed in a single bunch compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  16. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    International Nuclear Information System (INIS)

    Geloni, G.; Saldin, E.L.; Schneidmiller, E.A.; Yurkov, M.V.

    2004-01-01

    An effective and practical technique based on detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, so the complete profile function cannot, in general, be obtained. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction, based on a priori information about the formation of the electron bunch. Application of the method is illustrated with a practically important example: a bunch formed in a single bunch compressor. Downstream of the bunch compressor, the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  17. OSL Age Determination of the Hearths in a Bronze Age Dwelling Site by using Bayesian Statistics

    International Nuclear Information System (INIS)

    Kim, Myung Jin; Yang, Hye Jin; Hong, Duk Geun

    2011-01-01

    OSL dating was carried out for three hearths with a known sequence of use and discard in dwelling sites No. 29 and 29-1 at the Sogol cultural site. From the deconvolution of the natural CW-OSL decay curve and a thermal zeroing test, it turned out that the OSL signal was entirely composed of the heat- and light-sensitive fast component with a high photoionization cross-section, and that all quartz OSL signals were thermally bleached below 300 °C, the minimum temperature associated with heating and cooking in the Bronze Age. After a dose recovery test and a plateau test, the paleodose of each hearth sample was evaluated using the SAR method, and the OSL age was determined from the ratio of paleodose to annual dose rate. To improve the precision of the OSL ages, Bayesian statistics was applied to each hearth's age together with the archaeological sequence information. From the resultant OSL ages, the use period of each hearth could be accurately constrained.
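
The Bayesian step can be illustrated with a minimal rejection-sampling sketch. All ages and uncertainties below are invented for illustration: given Gaussian likelihoods for two hearth ages and the stratigraphic constraint that hearth 1 was used before hearth 2, conditioning on the ordering both shifts the posterior means apart and tightens them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unconstrained OSL age likelihoods in ka (illustrative numbers, not from the paper)
a1 = rng.normal(3.1, 0.3, 200_000)   # hearth 1 (stratigraphically earlier)
a2 = rng.normal(2.9, 0.3, 200_000)   # hearth 2 (stratigraphically later)

# Bayesian sequence constraint: keep only samples consistent with a1 > a2
keep = a1 > a2
post1, post2 = a1[keep], a2[keep]

# The constrained posteriors separate further and have smaller spread
print(post1.mean(), post2.mean(), post1.std(), post2.std())
```

The same ordering constraint generalizes to any number of hearths by keeping only samples in the full stratigraphic sequence.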

  18. OSL Age Determination of the Hearths in a Bronze Age Dwelling Site by using Bayesian Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myung Jin [Neosiskorea Co. Ltd., Seoul (Korea, Republic of); Yang, Hye Jin [Baekje Cultural Properties Research Institute, Gongju (Korea, Republic of); Hong, Duk Geun [Kangwon National University, Chuncheon (Korea, Republic of)

    2011-06-15

    OSL dating was carried out for three hearths with a known sequence of use and discard in dwelling sites No. 29 and 29-1 at the Sogol cultural site. From the deconvolution of the natural CW-OSL decay curve and a thermal zeroing test, it turned out that the OSL signal was entirely composed of the heat- and light-sensitive fast component with a high photoionization cross-section, and that all quartz OSL signals were thermally bleached below 300 °C, the minimum temperature associated with heating and cooking in the Bronze Age. After a dose recovery test and a plateau test, the paleodose of each hearth sample was evaluated using the SAR method, and the OSL age was determined from the ratio of paleodose to annual dose rate. To improve the precision of the OSL ages, Bayesian statistics was applied to each hearth's age together with the archaeological sequence information. From the resultant OSL ages, the use period of each hearth could be accurately constrained.

  19. Evaluation of observables in statistical multifragmentation theories

    International Nuclear Information System (INIS)

    Cole, A.J.

    1989-01-01

    The canonical formulation of equilibrium statistical multifragmentation is examined. It is shown that the explicit construction of observables (average values) by sampling the partition probabilities is unnecessary, insofar as closed expressions in the form of recursion relations can be obtained quite easily. Such expressions may conversely be used to verify sampling algorithms.
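
A sketch in the spirit of the canonical recursion relations the abstract refers to (not the paper's own equations; the single-fragment weights `w[k]` are arbitrary illustrative factors): partition functions satisfy Z_A = (1/A) * sum_k k*w[k]*Z_{A-k}, and average multiplicities follow in closed form without sampling partitions.

```python
def partition_functions(w, A):
    """Canonical partition functions Z_a for fragments of size k with weights w[k],
    via the recursion Z_a = (1/a) * sum_{k=1..a} k * w[k] * Z_{a-k}, Z_0 = 1."""
    Z = [1.0] + [0.0] * A
    for a in range(1, A + 1):
        Z[a] = sum(k * w[k] * Z[a - k] for k in range(1, a + 1)) / a
    return Z

def mean_multiplicity(w, A, k):
    """Closed-form average multiplicity <n_k> = w[k] * Z_{A-k} / Z_A (no sampling)."""
    Z = partition_functions(w, A)
    return w[k] * Z[A - k] / Z[A]

# Mass conservation holds exactly by the recursion: sum_k k * <n_k> = A
A = 12
w = [0.0] + [1.0 / k for k in range(1, A + 1)]   # arbitrary illustrative weights
total_mass = sum(k * mean_multiplicity(w, A, k) for k in range(1, A + 1))
```

The mass sum rule above is exactly the kind of closed expression that can also serve as a check on Monte Carlo sampling of the same ensemble.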

  20. Correspondence regarding Zhong et al., BMC Bioinformatics 2013 Mar 7;14:89.

    Science.gov (United States)

    Kuhn, Alexandre

    2014-11-28

    Computational expression deconvolution aims to estimate the contribution of individual cell populations to expression profiles measured in samples of heterogeneous composition. Zhong et al. recently proposed the Digital Sorting Algorithm (DSA; BMC Bioinformatics 2013 Mar 7;14:89) and showed that it could accurately estimate population-specific expression levels and expression differences between two populations. They compared DSA with Population-Specific Expression Analysis (PSEA), a previous deconvolution method that we developed to detect expression changes occurring within the same population between two conditions (e.g. disease versus non-disease). However, Zhong et al. compared PSEA-derived specific expression levels across different cell populations. Specific expression levels obtained with PSEA cannot be directly compared across populations because they are on a relative scale. They are nonetheless accurate, as we demonstrate by deconvolving the same dataset used by Zhong et al., and, importantly, they allow comparison of population-specific expression across conditions.
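
The relative-scale point can be made concrete with a small synthetic sketch (the numbers and least-squares setup are illustrative, not the PSEA implementation): when the reference signal for each population carries an arbitrary population-specific scale factor, a linear deconvolution recovers specific expression only up to that factor, so estimates are comparable across conditions (same scaling) but not across populations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                        # heterogeneous samples
p = rng.dirichlet([5.0, 3.0], size=n)          # true proportions of populations A, B
true_expr = np.array([10.0, 4.0])              # true population-specific expression
y = p @ true_expr + rng.normal(0.0, 0.05, n)   # measured bulk expression of one gene

scale = np.array([2.0, 0.5])                   # unknown scaling of the reference signals
ref = p * scale                                # PSEA-style regressors (relative scale)
coef, *_ = np.linalg.lstsq(ref, y, rcond=None)

# coef estimates true_expr / scale: comparing coef[0] with coef[1] directly is
# meaningless, but ratios across conditions sharing the same scaling are valid.
```

Rescaling `coef` by the (unknown in practice) factors recovers the absolute levels, which is exactly the information a cross-population comparison would need.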

  1. The reactions of neutral iron clusters with D2O: Deconvolution of equilibrium constants from multiphoton processes

    International Nuclear Information System (INIS)

    Weiller, B.H.; Bechthold, P.S.; Parks, E.K.; Pobo, L.G.; Riley, S.J.

    1989-01-01

    The chemical reactions of neutral iron clusters with D 2 O are studied in a continuous flow tube reactor by molecular beam sampling and time-of-flight mass spectrometry with laser photoionization. Product distributions are invariant to a four-fold change in reaction time demonstrating that equilibrium is attained between free and adsorbed D 2 O. The observed negative temperature dependence is consistent with an exothermic, molecular addition reaction at equilibrium. Under our experimental conditions, there is significant photodesorption of D 2 O (Fe/sub n/(D 2 O)/sub m/ + hν → Fe/sub n/ + m D 2 O) along with ionization due to absorption of multiple photons from the ionizing laser. Using a simple model based on a rate equation analysis, we are able to quantitatively deconvolute this desorption process from the equilibrium constants. 8 refs., 1 fig

  2. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    Science.gov (United States)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
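
A one-dimensional frequency-domain caricature of the deblurring step (toy signals only; real MDD is multidimensional and inverts the interferometric point-spread function as an operator): the correlation function is the Green's function blurred by the point-spread function, and dividing it out with water-level regularization recovers the arrivals.

```python
import numpy as np

n = 256
g = np.zeros(n); g[20] = 1.0; g[60] = 0.5            # toy Green's function: two arrivals
s = np.exp(-np.arange(n) / 5.0)                      # toy interferometric point-spread function
C = np.fft.ifft(np.fft.fft(g) * np.fft.fft(s)).real  # "correlation function": G blurred by S

Sf = np.fft.fft(s)
eps = 1e-4 * np.max(np.abs(Sf)) ** 2                 # water-level regularization
g_est = np.fft.ifft(np.fft.fft(C) * np.conj(Sf) / (np.abs(Sf) ** 2 + eps)).real
```

The regularization constant trades resolution against noise amplification; in the noiseless toy case the two arrival amplitudes are recovered almost exactly.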

  3. The mathematics of a successful deconvolution: a quantitative assessment of mixture-based combinatorial libraries screened against two formylpeptide receptors.

    Science.gov (United States)

    Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia

    2013-05-30

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  4. Use of new spectral analysis methods in gamma spectra deconvolution

    International Nuclear Information System (INIS)

    Pinault, J.L.

    1991-01-01

    A general deconvolution method applicable to X- and gamma-ray spectrometry is proposed. Using new spectral analysis methods, it is applied to an actual case: the accurate on-line analysis of three elements (Ca, Si, Fe) in a cement plant using neutron capture gamma rays. Neutrons are provided by a low-activity (5 μg) 252 Cf source; the detector is a 3 in. x 8 in. BGO scintillator. The principle of the method rests on the Fourier transform of the spectrum. The search for peaks and the determination of peak areas are carried out in the Fourier representation, which enables separation of background and peaks and very efficiently discriminates peaks, or elements represented by several peaks. First the spectrum is transformed so that in the new representation the full width at half maximum (FWHM) is independent of energy. The spectrum is then arranged symmetrically and transformed into the Fourier representation, which is multiplied by a function that converts the original Gaussian peaks into Lorentzian peaks. An autoregressive filter is calculated, leading to a characteristic polynomial whose complex roots represent both the location and the width of each peak, provided that their absolute value is lower than unity. The amplitude of each component (the area of each peak, or the sum of the areas of the peaks characterizing an element) is fitted by the weighted least squares method, taking into account that errors in spectra are independent and follow a Poisson law. Very accurate results are obtained, which would be hard to achieve by other methods. The DECO FORTRAN code has been developed for compatible PC microcomputers. Some features of the code are given. (orig.)
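
The Gaussian-to-Lorentzian step can be sketched in numpy (a minimal illustration of the Fourier-domain multiplication; the grid, peak widths and single test peak are invented, and the autoregressive-filter stage of the actual method is not reproduced): the Fourier transform of a Gaussian line is exp(-sigma^2 w^2 / 2) and that of a Lorentzian is exp(-gamma|w|), so multiplying the spectrum's transform by their ratio converts one shape into the other.

```python
import numpy as np

def gaussian_to_lorentzian(spectrum, sigma, gamma):
    """Turn Gaussian peaks (std sigma, in channels) into Lorentzian peaks
    (half width at half maximum gamma) by multiplying the Fourier transform
    by exp(-gamma*|w|) / exp(-sigma^2 * w^2 / 2)."""
    n = len(spectrum)
    w = 2.0 * np.pi * np.fft.fftfreq(n)          # angular frequency per channel
    ratio = np.exp(-gamma * np.abs(w) + 0.5 * sigma**2 * w**2)
    return np.real(np.fft.ifft(np.fft.fft(spectrum) * ratio))

# Demo: one well-sampled Gaussian peak at channel 200
x = np.arange(512)
sigma, gamma = 1.5, 3.0
spec = np.exp(-(x - 200.0) ** 2 / (2.0 * sigma**2))
lor = gaussian_to_lorentzian(spec, sigma, gamma)
```

Note the ratio grows for sharp Gaussians (small Fourier amplitudes get amplified), so on noisy spectra this step needs regularization; the sketch keeps sigma small enough for the division to stay benign.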

  5. Deconvolution of Thermal Emissivity Spectra of Mercury to their Endmember Counterparts measured in Simulated Mercury Surface Conditions

    Science.gov (United States)

    Varatharajan, I.; D'Amore, M.; Maturilli, A.; Helbert, J.; Hiesinger, H.

    2017-12-01

    The Mercury Radiometer and Thermal Imaging Spectrometer (MERTIS) payload of the ESA/JAXA BepiColombo mission to Mercury will map thermal emissivity in the 7-14 μm wavelength range at a spatial resolution of 500 m/pixel [1]. Mercury was also imaged in the same wavelength range using Boston University's Mid-Infrared Spectrometer and Imager (MIRSI) mounted on the NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, with a minimum spatial coverage of 400-600 km/spectrum, which blends all rock, mineral, and soil types [2]. The study [2] therefore used the quantitative deconvolution algorithm developed by [3] for spectral unmixing of this composite thermal emissivity telescope spectrum into the areal fractions of its endmember spectra; however, the endmember emissivities used in [2] are inverted reflectance measurements (Kirchhoff's law) of various samples taken at room temperature and pressure. For over a decade, the Planetary Spectroscopy Laboratory (PSL) at the Institute of Planetary Research (PF) of the German Aerospace Center (DLR) has supported the MERTIS payload with thermal emissivity measurements under controlled, simulated Mercury surface conditions, at temperatures from 100-500 °C under vacuum. The measured thermal emissivity endmember spectral library includes major silicates such as bytownite, anorthoclase, synthetic glass, olivine, enstatite, and nepheline basanite; rocks such as komatiite and tektite; the Johnson Space Center lunar simulant (1A); and synthetic powdered sulfides including MgS, FeS, CaS, CrS, TiS, NaS, and MnS. Using such a specialized endmember spectral library created under Mercury conditions significantly increases the accuracy of the deconvolution model results. In this study, we revisited the available telescope spectra and redeveloped the algorithm of [3], choosing only the endmember spectral library created at PSL for an unbiased model

  6. Fade statistics of M-turbulent optical links

    DEFF Research Database (Denmark)

    Jurado-Navas, Antonio; Maria Garrido-Balsells, Jose; Castillo-Vazquez, Miguel

    2017-01-01

    A new and generalized statistical model, called Malaga or simply M distribution, has recently been derived to characterize the irradiance fluctuations of an unbounded optical wavefront propagating through a turbulent medium under all irradiance fluctuation conditions. The aforementioned model extends and unifies, in a simple analytical closed-form expression, most of the statistical models for free-space optical (FSO) communications widely employed until now in the scientific literature. Based on that M model, we have studied some important features associated with its fade statistics...

  7. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained from such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulations of Poisson processes in general. The analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
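
For a plain Poisson count the general result specializes to the familiar 1/sqrt(N) relative error; a quick numerical check of that special case (the event rate and number of runs are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 400.0                       # expected number of diffusion events per run
runs = 20_000                     # independent simulations
counts = rng.poisson(lam, size=runs)

rel_err_pred = 1.0 / np.sqrt(lam)            # analytical: sigma/mean = 1/sqrt(lambda)
rel_err_obs = counts.std() / counts.mean()   # observed run-to-run scatter
```

With 400 expected events per run, the run-to-run scatter of the simulated conductivity-proxy count is about 5%, matching the analytical prediction.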

  8. Analysis of the deconvolution of the thermoluminescent curve of the zirconium oxide doped with graphite

    International Nuclear Information System (INIS)

    Salas C, P.; Estrada G, R.; Gonzalez M, P.R.; Mendoza A, D.

    2003-01-01

    In this work, we present a mathematical analysis of the behavior of the thermoluminescent (TL) curve induced by gamma radiation in samples of zirconium oxide doped with different amounts of graphite. According to the results, gamma radiation induces a TL curve with two emission maxima, located at 139 and 250 °C; the area under the curve increases as a function of the exposure time. From the deconvolution analysis of the curve, carried out under the assumption that each peak follows a Boltzmann distribution, we found that each maximum grows at a different rate as the exposure time increases. Likewise, we observed that after irradiation was stopped, each maximum decreased at a different rate. The observed behaviour is of particular interest because zirconium oxide has attracted the attention of many research groups: the material has been shown to have many applications in thermoluminescent dosimetry and can be used for the quantification of radiation. (Author)

  9. VEGF expression in hepatectomized tumor-bearing mice.

    Science.gov (United States)

    Andrini, L; Blanco, A Fernandez; Inda, A; García, M; Garcia, A; Errecalde, A

    2011-01-01

    The experiments were designed to study VEGF expression in intact (group I), hepatectomized (group II), and hepatectomized tumor-bearing mice (group III) throughout one complete circadian time span. Adult male mice were used for the VEGF expression study. The statistical analysis was performed using analysis of variance (ANOVA). The results showed statistical differences in VEGF expression between groups I and II, but the most significant differences were found between groups I and III. In conclusion, VEGF expression has a circadian rhythm in all groups; moreover, in group III this expression was higher and appeared earlier than in the others.

  10. Deconvolution analysis of sup(99m)Tc-methylene diphosphonate kinetics in metabolic bone disease

    International Nuclear Information System (INIS)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.; Hamburg Univ.

    1981-01-01

    The kinetics of sup(99m)Tc-methylene diphosphonate (MDP) and 47 Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of sup(99m)Tc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47 Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The sup(99m)Tc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between sup(99m)Tc-MDP bone accumulation rates and the results of 47 Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and the bone accumulation rates (R = 0.71). The lack of correlation with the 47 Ca kinetics might suggest a preferential binding of sup(99m)Tc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations. (orig.)

  11. Statistical equilibrium equations for trace elements in stellar atmospheres

    OpenAIRE

    Kubat, Jiri

    2010-01-01

    The conditions of thermodynamic equilibrium, local thermodynamic equilibrium, and statistical equilibrium are discussed in detail. The equations of statistical equilibrium and the supplementary equations are shown together with the expressions for radiative and collisional rates, with emphasis on the solution for trace elements.

  12. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    International Nuclear Information System (INIS)

    Greenberg, M.; Ebel, D.S.

    2009-01-01

    We present a nondestructive 3D system for the analysis of whole Stardust tracks, using a combination of laser confocal scanning microscopy and synchrotron XRF; 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from the submicron to the millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of laser confocal scanning microscopy (LCSM) and X-ray fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129 and No. 140). We present a method for removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analyses.
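
The paper's own 3D deconvolution algorithm is not specified in the abstract; as a generic stand-in for PSF-based deblurring of confocal data, a minimal 1-D Richardson-Lucy iteration with a known point spread function looks like this (all signals invented):

```python
import numpy as np

def richardson_lucy(image, psf, iters=300):
    """Minimal 1-D Richardson-Lucy deconvolution with a known PSF
    (circular convolution via FFT; a generic stand-in, not the paper's algorithm)."""
    psf = psf / psf.sum()
    Pf = np.fft.fft(psf)
    est = np.full_like(image, max(image.mean(), 1e-12))
    for _ in range(iters):
        blurred = np.fft.ifft(np.fft.fft(est) * Pf).real
        ratio = image / np.maximum(blurred, 1e-12)
        est = est * np.fft.ifft(np.fft.fft(ratio) * np.conj(Pf)).real
    return est

# Demo: two point sources blurred by a Gaussian PSF (sigma = 2 channels)
n = 128
truth = np.zeros(n); truth[30] = 1.0; truth[40] = 0.7
psf = np.roll(np.exp(-(np.arange(n) - n // 2) ** 2 / (2.0 * 2.0**2)), -n // 2)
image = np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf / psf.sum())).real
restored = richardson_lucy(image, psf)
```

Richardson-Lucy preserves non-negativity, which suits intensity data such as confocal stacks; in 3D the same iteration runs with 3-D FFTs and a measured (or computed) point spread function.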

  13. Statistical evaluation of SAGE libraries: consequences for experimental design

    NARCIS (Netherlands)

    Ruijter, Jan M.; van Kampen, Antoine H. C.; Baas, Frank

    2002-01-01

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags

  14. Statistical time lags in ac discharges

    International Nuclear Information System (INIS)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M; Manders, F

    2011-01-01

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. AC voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms^-1. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as in discharges driven by a voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested for an alternating voltage form.

  15. Statistical time lags in ac discharges

    Energy Technology Data Exchange (ETDEWEB)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M [Eindhoven University of Technology, Department of Applied Physics, Postbus 513, 5600MB Eindhoven (Netherlands); Manders, F, E-mail: a.sobota@tue.nl [Philips Lighting, LightLabs, Mathildelaan 1, 5600JM Eindhoven (Netherlands)

    2011-04-06

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. AC voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms{sup -1}. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as in discharges driven by a voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested for an alternating voltage form.

  16. Anisotropic strain in YBa2Cu3O7-δ films analysed by deconvolution of two-dimensional intensity data

    International Nuclear Information System (INIS)

    Broetz, J.; Fuess, H.

    2001-01-01

    The influence of the instrumental resolution on two-dimensional reflection profiles of epitaxic YBa 2 Cu 3 O 7-δ films on SrTiO 3 (001) has been studied in order to investigate the strain in the superconducting films. The X-ray diffraction intensity data were obtained by two-dimensional scans in reciprocal space (q-scan). Since the reflection broadening caused by the apparatus differs for each position in reciprocal space, a highly crystalline substrate was used as a standard. Thus it was possible to measure a standard very close to the YBa 2 Cu 3 O 7-δ reflections in reciprocal space. The two-dimensional deconvolution of reflections by a new computer program revealed an anisotropic strain of the two twinning systems of the film. (orig.)

  17. A comprehensive platform for highly multiplexed mammalian functional genetic screens

    Directory of Open Access Journals (Sweden)

    Cheung-Ong Kahlin

    2011-05-01

    Background: Genome-wide screening in human and mouse cells using RNA interference and open reading frame over-expression libraries is rapidly becoming a viable experimental approach for many research labs. A variety of gene expression modulation libraries are commercially available; however, detailed and validated protocols, as well as the reagents necessary for deconvolving genome-scale gene screens using these libraries, are lacking. As a solution, we designed a comprehensive platform for highly multiplexed functional genetic screens in human, mouse and yeast cells using popular, commercially available gene modulation libraries. The Gene Modulation Array Platform (GMAP) is a single microarray-based detection solution for deconvolution of loss- and gain-of-function pooled screens. Results: Experiments with specially constructed lentiviral-based plasmid pools containing ~78,000 shRNAs demonstrated that GMAP is capable of deconvolving genome-wide shRNA "dropout" screens. Further experiments with a larger pool of ~90,000 shRNAs demonstrate that equivalent results are obtained from plasmid pools and from genomic DNA derived from lentivirus-infected cells. Parallel testing of large shRNA pools using GMAP and next-generation sequencing methods revealed that the two methods provide valid and complementary approaches to deconvolution of genome-wide shRNA screens. Additional experiments demonstrated that GMAP is equivalent to similar microarray-based products when used for deconvolution of open reading frame over-expression screens. Conclusion: Herein, we demonstrate four major applications for the GMAP resource, including deconvolution of pooled RNAi screens in cells with at least 90,000 distinct shRNAs. We also provide detailed methodologies for pooled shRNA screen readout using GMAP and compare next-generation sequencing to GMAP (i.e. microarray-based) deconvolution methods.

  18. Statistical prediction of Late Miocene climate

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A; Gupta, S.M.

    ... by making certain simplifying assumptions; for example, in modelling ocean currents the geostrophic approximation is made. In the case of statistical prediction no such a priori assumption need be made. Statistical prediction comprises using observed data ... the number of equations. In this case the equations are overdetermined, and therefore one has to look for a solution that best fits the sample data in a least squares sense. To this end the sample data are expressed as a linear regression on the predictors (their Eq. 2.1) ...

  19. Statistics of spatially integrated speckle intensity difference

    DEFF Research Database (Denmark)

    Hanson, Steen Grüner; Yura, Harold

    2009-01-01

    We consider the statistics of the spatially integrated speckle intensity difference obtained from two separated finite collecting apertures. For fully developed speckle, closed-form analytic solutions for both the probability density function and the cumulative distribution function are derived here, for arbitrary values of the mean number of speckles contained within an aperture and of the degree of coherence of the optical field. Additionally, closed-form expressions are obtained for the corresponding nth statistical moments...

  20. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Science.gov (United States)

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method for a given application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation on real data, together with the results of the MDS and entropy analyses, provides an insightful and practical guideline for choosing the most suitable method in a given application. All source files for the simulations and real data are available on the authors' publication website.
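
One classical member of the HS(B) family is Fisher's method, which combines per-study p-values into X = -2*sum(ln p_i), chi-square distributed with 2k degrees of freedom under the null. A self-contained sketch (the study p-values are invented), using the closed-form chi-square survival function available for even degrees of freedom:

```python
import math

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k d.o.f. under H0.
    Returns the combined p-value via the closed-form survival function for even d.o.f.:
    sf(x; 2k) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!"""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    term, s = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        s += term
    return math.exp(-x / 2.0) * s

# A gene strongly significant in one of three studies: an HS(B)-type signal
p_combined = fisher_combine([0.001, 0.4, 0.6])   # combined p ~ 0.011
```

Because the statistic sums log p-values, one very small p-value can dominate, which is exactly why Fisher's method targets the "non-zero effect in one or more studies" setting rather than HS(A).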

  1. Response function during oxygen sputter profiling and its application to deconvolution of ultrashallow B depth profiles in Si

    International Nuclear Information System (INIS)

    Shao Lin; Liu Jiarui; Wang Chong; Ma, Ki B.; Zhang Jianming; Chen, John; Tang, Daniel; Patel, Sanjay; Chu Weikan

    2003-01-01

    The secondary ion mass spectrometry (SIMS) response function to a B 'δ surface layer' has been investigated. Using electron-gun evaporation combined with liquid nitrogen cooling of the target, we were able to deposit an ultrathin B layer without detectable island formation. The B spatial distribution obtained from SIMS decays exponentially, with a decay length that is approximately a linear function of the incident energy of the oxygen during the SIMS analysis. Deconvolution with this response function has been applied to reconstruct the spatial distribution of ultra-low-energy B implants. A correction to the depth and yield scales due to transient sputtering near the Si surface region was also applied. Transient erosion shifts the profile shallower, but beam mixing shifts it deeper. These mutually compensating effects make the adjusted distribution almost the same as the original data. The one significant difference is a buried B peak observed near the surface region.
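
Because the response to a δ layer is a decaying exponential, the deconvolution has a particularly simple closed form in discrete channels: if the measured profile obeys y[k] = x[k] + a*y[k-1] (geometric tail with per-channel decay a), then the true profile is recovered exactly by x[k] = y[k] - a*y[k-1]. A toy sketch (decay length and profile are invented, not the paper's data):

```python
import numpy as np

a = np.exp(-1.0 / 3.0)       # per-channel decay; decay length 3 channels (illustrative)

def blur(x):
    """Forward model: convolve the true profile with the exponential response."""
    y = np.zeros_like(x)
    acc = 0.0
    for k, v in enumerate(x):
        acc = v + a * acc    # y[k] = x[k] + a * y[k-1]
        y[k] = acc
    return y

def deblur(y):
    """Exact inverse of the exponential response: x[k] = y[k] - a*y[k-1]."""
    x = np.empty_like(y)
    x[0] = y[0]
    x[1:] = y[1:] - a * y[:-1]
    return x

true_profile = np.zeros(50)
true_profile[10], true_profile[11] = 1.0, 2.0   # shallow implant-like spike pair
recovered = deblur(blur(true_profile))
```

On real data this sharp inverse amplifies counting noise, so in practice the decay constant must be measured accurately and some smoothing applied, but the algebra above is why an exponential response is especially tractable.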

  2. Statistical deconvolution of enthalpic energetic contributions to MHC-peptide binding affinity

    Directory of Open Access Journals (Sweden)

    Drew Michael GB

    2006-03-01

    Full Text Available Abstract Background MHC Class I molecules present antigenic peptides to cytotoxic T cells, a process that forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results A large dataset comprising MHC-peptide structural complexes was created by re-modelling pre-determined X-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) algorithms were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.

  3. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of a conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a Stroop experiment and compared to conjunction analysis approaches for test statistics. The combination of Bayesian statistics with propositional logic provides a new approach to data analysis in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to full propositional logic and adapt it from test statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
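
Under the simplifying assumption that the effects are independent at each voxel (the article's GLM-based treatment is more general), probabilities of logical expressions over PPMs reduce to elementary probability rules. A minimal sketch:

```python
import numpy as np

# Voxelwise posterior probabilities of two effects (toy three-voxel maps).
p_a = np.array([0.95, 0.40, 0.80])
p_b = np.array([0.90, 0.85, 0.10])

# Assuming independence, propositional logic over the maps reduces to
# elementary probability rules:
p_and = p_a * p_b                    # conjunction  A AND B
p_or = p_a + p_b - p_a * p_b         # disjunction  A OR B
p_implies = 1.0 - p_a + p_a * p_b    # implication  A -> B, i.e. (NOT A) OR B
```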

  4. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

    In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) to the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded consistent results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO2^-, an axial component CO3^3-, as well as four isotropic components, three of which could be attributed to SO2^-, a tumbling CO2^- and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra, where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose-response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods.
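
MLCFA itself is a maximum-likelihood factor model; as a rough stand-in, the core idea of estimating how many components underlie a set of mixed spectra can be illustrated with an SVD of synthetic two-component spectra (hypothetical Gaussian line shapes, not EPR data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)

# Two "pure" spectral components (Gaussian-shaped lines).
comp1 = np.exp(-((x + 0.3) ** 2) / 0.02)
comp2 = np.exp(-((x - 0.4) ** 2) / 0.05)

# Ten observed spectra, each a random mixture of the two components.
weights = rng.uniform(0.2, 1.0, size=(10, 2))
spectra = weights @ np.vstack([comp1, comp2])

# The singular value spectrum reveals the number of common factors:
# two dominant values, the rest at numerical noise level.
s = np.linalg.svd(spectra, compute_uv=False)
print(s[:4])
```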

  5. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends MS-VSTS to spherical 2D-1D data, where the first two dimensions are longitude and latitude, and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma rays over a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).
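
The variance-stabilizing step can be illustrated with the classical Anscombe transform, a close relative of the MS-VSTS building block: after the transform, Poisson data have approximately unit variance regardless of intensity. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
counts = rng.poisson(lam=20.0, size=100_000)

# Anscombe transform: after t(x) = 2*sqrt(x + 3/8), the variance is
# approximately 1, independent of the Poisson intensity, so Gaussian
# denoising machinery can be applied to the transformed data.
stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)
print(stabilized.std())  # close to 1
```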

  6. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    Science.gov (United States)

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed into glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed on both simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as hypo/hyperglycemic alert generators or the artificial pancreas.
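
The blood-to-interstitium distortion is commonly modeled as a first-order lag; a regularized deconvolution of such a lag can be sketched as a Tikhonov-penalized least-squares problem. This is a simplified stand-in with hypothetical parameters, not the authors' calibrated algorithm:

```python
import numpy as np

n, dt, tau = 120, 1.0, 10.0        # samples, minutes, plasma-interstitium lag
t = np.arange(n) * dt

# Forward model: interstitial glucose = blood glucose convolved with a
# first-order kernel h(t) = exp(-t/tau) / tau.
h = np.exp(-t / tau) / tau * dt
A = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])

bg = 100.0 + 40.0 * np.sin(2.0 * np.pi * t / 120.0)  # "true" blood glucose
ig = A @ bg                                          # sensed interstitial signal

# Tikhonov-regularized deconvolution with a second-difference penalty
# that discourages non-physiological roughness in the estimate.
D = np.diff(np.eye(n), 2, axis=0)
gamma = 1e-2
bg_hat = np.linalg.solve(A.T @ A + gamma * (D.T @ D), A.T @ ig)
```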

  7. Functional speciation of metal-dissolved organic matter complexes by size exclusion chromatography coupled to inductively coupled plasma mass spectrometry and deconvolution analysis

    International Nuclear Information System (INIS)

    Laborda, Francisco; Ruiz-Begueria, Sergio; Bolea, Eduardo; Castillo, Juan R.

    2009-01-01

    High performance size exclusion chromatography coupled to inductively coupled plasma mass spectrometry (HP-SEC-ICP-MS), in combination with deconvolution analysis, has been used to obtain multielemental qualitative and quantitative information about the distributions of metal complexes with different forms of natural dissolved organic matter (DOM). HP-SEC-ICP-MS chromatograms alone only provide continuous distributions of metals with respect to molecular mass, owing to the high heterogeneity of DOM, which consists of humic substances as well as biomolecules and other organic compounds. A functional speciation approach, based on the determination of the metals associated with different groups of homologous compounds, has therefore been followed. DOM groups of homologous compounds are isolated from the aqueous samples under study and their HP-SEC-ICP-MS elution profiles are fitted to model Gaussian peaks, characterized by their respective retention times and peak widths. HP-SEC-ICP-MS chromatograms of the samples are then deconvoluted with respect to these model Gaussian peaks. This methodology has been applied to the characterization of metal-DOM complexes in compost leachates. The most significant groups of homologous compounds involved in the complexation of metals in the compost leachates studied were hydrophobic acids (humic and fulvic acids) and low molecular mass hydrophilic compounds. The environmental significance of these compounds is related to the higher biodegradability of the low molecular mass hydrophilic compounds and the lower mobility of humic acids. 
In general, the hydrophilic compounds accounted for the complexation of around 50% of the leached
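
The deconvolution against model Gaussian peaks described above amounts to a linear fit once the retention times and peak widths of the homologous groups are fixed. A minimal sketch with hypothetical peak parameters:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 600)  # retention time, minutes

def gaussian(t, center, width):
    return np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

# Model peaks for three groups of homologous compounds, each with a
# known retention time and peak width (hypothetical values).
templates = np.column_stack([
    gaussian(t, 8.0, 1.2),    # humic acids
    gaussian(t, 14.0, 1.5),   # fulvic acids
    gaussian(t, 22.0, 1.0),   # low-molecular-mass hydrophilics
])

# Synthetic single-element chromatogram built from known contributions.
true_amounts = np.array([3.0, 1.5, 4.0])
chromatogram = templates @ true_amounts

# Deconvolution: contributions of each group via linear least squares.
amounts, *_ = np.linalg.lstsq(templates, chromatogram, rcond=None)
```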

  8. New statistical model of inelastic fast neutron scattering

    International Nuclear Information System (INIS)

    Stancicj, V.

    1975-07-01

    A new statistical model for treating fast neutron inelastic scattering is proposed, using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including angular momenta coupling. The values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with experiment. A main advantage of the derived expressions is their simplicity for practical calculations.

  9. An improved Fuzzy Kappa statistic that accounts for spatial autocorrelation

    NARCIS (Netherlands)

    Hagen - Zanker, A.H.

    2009-01-01

    The Fuzzy Kappa statistic expresses the agreement between two categorical raster maps. The statistic goes beyond cell-by-cell comparison and gives partial credit to cells based on the categories found in the neighborhood. When matching categories are found at shorter distances the agreement is

  10. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    Science.gov (United States)

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
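
BATMAN itself uses MCMC over shift-adjustable templates plus wavelets; the core idea of template-based quantification with Bayesian uncertainty can be sketched with a conjugate Gaussian linear model and fixed, hypothetical templates:

```python
import numpy as np

rng = np.random.default_rng(7)
ppm = np.linspace(0.0, 10.0, 500)

def lorentzian(ppm, shift, width=0.05):
    return width ** 2 / ((ppm - shift) ** 2 + width ** 2)

# Templates for two hypothetical metabolites (a singlet and a doublet).
X = np.column_stack([
    lorentzian(ppm, 3.2),
    lorentzian(ppm, 1.3) + lorentzian(ppm, 1.4),
])

true_conc = np.array([2.0, 5.0])
sigma = 0.05
y = X @ true_conc + rng.normal(0.0, sigma, ppm.size)

# Conjugate Gaussian model: prior conc ~ N(0, tau^2 I), noise ~ N(0, sigma^2).
# The posterior is Gaussian, giving concentration estimates with uncertainty.
tau = 10.0
post_cov = np.linalg.inv(X.T @ X / sigma ** 2 + np.eye(2) / tau ** 2)
post_mean = post_cov @ X.T @ y / sigma ** 2
post_sd = np.sqrt(np.diag(post_cov))
```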

  11. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    Full Text Available The use of maximum length sequences (m-sequences) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial of a given order, the selection of polynomial order can be problematic in practice. Usually, the m-sequence is delivered repetitively in a looped fashion. Ensemble averaging is carried out as the first step, followed by cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on an additive noise model, theoretical equations have been derived in the present study for measuring noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is determined by the total length of valid data, as well as the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio when designing m-sequence experiments.
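
An m-sequence of a given order can be generated with a linear-feedback shift register built from a primitive polynomial. A minimal sketch for order 7, using the primitive trinomial x^7 + x^3 + 1 (one period has length 2^7 - 1 = 127 and contains 64 ones):

```python
def msequence(order=7, taps=(7, 3)):
    """Generate one period of a maximal-length sequence from a Fibonacci
    LFSR whose taps correspond to the primitive trinomial x^7 + x^3 + 1."""
    state = [1] * order          # any nonzero seed works
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(state[-1])
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return seq

seq = msequence()
print(len(seq), sum(seq))  # period 127, with 64 ones and 63 zeros
```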

  12. What is the value of official statistics and how do we communicate that value?

    Directory of Open Access Journals (Sweden)

    Tudorel ANDREI

    2014-09-01

    Full Text Available Einstein’s aphorism mentioned above wasn’t meant to colour the text, but we particularly believe it corresponds, in a way, to the topic we plan to introduce during the seminar. Consequently, paraphrasing it, the aphorism suggests a derived one which could read: “Not any statistics is official statistics and not any official statement that contains a numerical expression is statistics”. As to the above statements, the following question normally arises: “if not any statistics is official statistics, then what does official statistics mean and where does this brand, which represents a special value of statistics, come from?” On the other hand, if not any official statement that contains a numerical expression on a certain economic or social phenomenon is statistics, then what kind of meaning does it have?

  13. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Cyclin d1 expression in odontogenic cysts.

    Science.gov (United States)

    Taghavi, Nasim; Modabbernia, Shirin; Akbarzadeh, Alireza; Sajjadi, Samad

    2013-01-01

    In the present study expression of cyclin D1 in the epithelial lining of odontogenic keratocyst, radicular cyst, dentigerous cyst and glandular odontogenic cyst was investigated to compare proliferative activity in these lesions. Immunohistochemical staining of cyclin D1 on formalin-fixed, paraffin-embedded tissue sections of odontogenic keratocysts (n=23), dentigerous cysts (n=20), radicular cysts (n=20) and glandular odontogenic cysts (n=5) was performed by standard EnVision method. Then, slides were studied to evaluate the following parameters in epithelial lining of cysts: expression, expression pattern, staining intensity and localization of expression. The data analysis showed statistically significant difference in cyclin D1 expression in studied groups (p keratocysts, but difference was not statistically significant among groups respectively (p=0.204, 0.469). Considering expression localization, cyclin D1 positive cells in odontogenic keratocysts and dentigerous cysts were frequently confined in parabasal layer, different from radicular cysts and glandular odontogenic cysts. The difference was statistically significant (p keratocyst and the entire cystic epithelium of glandular odontogenic cysts comparing to dentigerous cysts and radicular cysts, implying the possible role of G1-S cell cycle phase disturbances in the aggressiveness of odontogenic keratocyst and glandular odontogenic cyst.

  15. Statistical approach for selection of biologically informative genes.

    Science.gov (United States)

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

    Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Most gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed with a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, is proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes.
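
Boot-MRMR itself adds bootstrapping on top of the mRMR criterion; the underlying greedy max-relevance-min-redundancy idea can be sketched with absolute correlation as a stand-in for the relevance and redundancy measures (toy data, not the BootMRMR package):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
g0 = rng.normal(size=n)                          # informative gene
g1 = g0 + rng.normal(scale=0.1, size=n)          # near-duplicate (redundant)
g2 = rng.normal(size=n)                          # second, independent informative gene
g3 = rng.normal(size=n)                          # noise gene
y = g0 + g2 + rng.normal(scale=0.5, size=n)      # trait driven by g0 and g2

genes = np.column_stack([g0, g1, g2, g3])

def mrmr(genes, y, k):
    """Greedy max-relevance-min-redundancy selection using |correlation|."""
    relevance = np.abs([np.corrcoef(genes[:, j], y)[0, 1]
                        for j in range(genes.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(genes.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(genes[:, j],
                                                  genes[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Picks the two non-redundant informative genes, skipping the duplicate.
print(mrmr(genes, y, 2))
```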

  16. PHYSICS OF NON-GAUSSIAN FIELDS AND THE COSMOLOGICAL GENUS STATISTIC

    International Nuclear Information System (INIS)

    James, J. Berian

    2012-01-01

    We report a technique to calculate the impact of distinct physical processes inducing non-Gaussianity on the cosmological density field. A natural decomposition of the cosmic genus statistic into an orthogonal polynomial sequence allows complete expression of the scale-dependent evolution of the topology of large-scale structure, in which effects including galaxy bias, nonlinear gravitational evolution, and primordial non-Gaussianity may be delineated. The relationship of this decomposition to previous methods for analyzing the genus statistic is briefly considered and the following applications are made: (1) the expression of certain systematics affecting topological measurements, (2) the quantification of broad deformations from Gaussianity that appear in the genus statistic as measured in the Horizon Run simulation, and (3) the study of the evolution of the genus curve for simulations with primordial non-Gaussianity. These advances improve the treatment of flux-limited galaxy catalogs for use with this measurement and further the use of the genus statistic as a tool for exploring non-Gaussianity.
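
The orthogonal-polynomial decomposition of the genus curve can be illustrated numerically: a Gaussian field's genus curve is proportional to He_1(nu) exp(-nu^2/2), so projecting onto the probabilists' Hermite modes should isolate the n = 1 coefficient. A sketch of a generic Hermite projection, not the paper's estimator:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

nu = np.linspace(-6.0, 6.0, 4001)
dnu = nu[1] - nu[0]

# For a Gaussian field, the genus curve is proportional to
# He_1(nu) * exp(-nu^2 / 2) = nu * exp(-nu^2 / 2).
genus = nu * np.exp(-nu ** 2 / 2.0)

def herme_coeff(curve, n):
    """Project a genus curve onto the n-th probabilists' Hermite mode,
    using orthogonality of He_n under the Gaussian weight:
    integral of He_m * He_n * exp(-nu^2/2) = sqrt(2*pi) * n! * delta_mn."""
    basis = hermeval(nu, [0.0] * n + [1.0])   # He_n(nu)
    norm = math.sqrt(2.0 * math.pi) * math.factorial(n)
    return float(np.sum(curve * basis) * dnu / norm)

coeffs = [herme_coeff(genus, n) for n in range(4)]
print(coeffs)  # essentially [0, 1, 0, 0] for a pure Gaussian genus curve
```

Non-Gaussianity from bias, gravitational evolution, or primordial physics would show up as nonzero coefficients in the other modes.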

  17. A method for express estimation of the octane number of gasoline using a portable spectroimpedance meter and statistical analysis methods

    Directory of Open Access Journals (Sweden)

    Mamykin A. V.

    2017-10-01

    Full Text Available The authors propose a method for determination of the electro-physical characteristics of electrical insulating liquids on the example of different types of gasoline. The method is based on the spectral impedance measurements of a capacitor electrochemical cell filled with the liquid under study. The application of sinusoidal test voltage in the frequency range of 0,1—10 Hz provides more accurate measurements in comparison with known traditional methods. A portable device for measuring total electrical resistance (impedance of dielectric liquids was designed and constructed. An approach for express estimation of octane number of automobile gasoline using spectroimpedance measurements and statistical multi variation methods of data analysis has been proposed and tested.

  18. Statistical-mechanical formulation of Lyapunov exponents

    International Nuclear Information System (INIS)

    Tanase-Nicola, Sorin; Kurchan, Jorge

    2003-01-01

    We show how the Lyapunov exponents of a dynamic system can, in general, be expressed in terms of the free energy of a (non-Hermitian) quantum many-body problem. This recasts their study as a problem of statistical mechanics, whose intuitive concepts and techniques of approximation can hence be borrowed.
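
For intuition, the Lyapunov exponent of a one-dimensional map is the orbit average of log|f'(x)|; for the logistic map at r = 4 this average is known to equal ln 2. A quick numerical check:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the long-run average of log|f'(x)| = log|r*(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic())  # close to ln 2 ~ 0.693 for r = 4
```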

  19. Statistical Considerations for Immunohistochemistry Panel Development after Gene Expression Profiling of Human Cancers

    Science.gov (United States)

    Betensky, Rebecca A.; Nutt, Catherine L.; Batchelor, Tracy T.; Louis, David N.

    2005-01-01

    In recent years there have been a number of microarray expression studies in which different types of tumors were classified by identifying a panel of differentially expressed genes. Immunohistochemistry is a practical and robust method for extending gene expression data to common pathological specimens with the advantage of being applicable to paraffin-embedded tissues. However, the number of assays required for successful immunohistochemical classification remains unclear. We propose a simulation-based method for assessing sample size for an immunohistochemistry investigation after a promising gene expression study of human tumors. The goals of such an immunohistochemistry study would be to develop and validate a marker panel that yields improved prognostic classification of cancer patients. We demonstrate how the preliminary gene expression data, coupled with certain realistic assumptions, can be used to estimate the number of immunohistochemical assays required for development. These assumptions are more tenable than alternative assumptions that would be required for crude analytic sample size calculations and that may yield underpowered and inefficient studies. We applied our methods to the design of an immunohistochemistry study for glioma classification and estimated the number of assays required to ensure satisfactory technical and prognostic validation. Simulation approaches for computing power and sample size that are based on existing gene expression data provide a powerful tool for efficient design of follow-up genomic studies. PMID:15858152
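
The simulation-based sample-size idea can be sketched in miniature: estimate power by Monte Carlo for each candidate number of assayed cases and pick the smallest that reaches the target. The two-sample z-test and effect size below are illustrative stand-ins, not the authors' glioma model:

```python
import numpy as np

def simulated_power(n_per_group, effect=0.5, reps=2000, seed=0):
    """Estimate the power of a two-sample z-test (known unit variance)
    to detect a standardized effect, by simulation."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.0, size=(reps, n_per_group))
    b = rng.normal(effect, 1.0, size=(reps, n_per_group))
    z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n_per_group)
    return float(np.mean(np.abs(z) > 1.96))

# Power rises with the number of assayed cases; choose the smallest n
# reaching the target (e.g., 80%).
for n in (16, 32, 64, 128):
    print(n, simulated_power(n))
```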

  20. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    Directory of Open Access Journals (Sweden)

    Roerdink Jos BTM

    2008-04-01

    Full Text Available Abstract Background We present a simple, data-driven method to extract haemodynamic response functions (HRFs) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs. Results ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains, and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series. Conclusion With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
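
The Fourier-regularized half of ForWaRD can be sketched as Tikhonov-damped division in the frequency domain (the wavelet shrinkage stage and the trend removal are omitted; the HRF shape and all parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)

# A simple gamma-shaped HRF (hypothetical parameters), normalized to unit sum.
hrf = (t / 6.0) ** 2 * np.exp(-t / 6.0)
hrf /= hrf.sum()

# Binary event sequence and its circular convolution with the HRF,
# standing in for the measured BOLD signal (noiseless here).
events = (rng.random(n) < 0.1).astype(float)
bold = np.fft.ifft(np.fft.fft(events) * np.fft.fft(hrf)).real

# Fourier-regularized deconvolution (Tikhonov/Wiener-style division):
S, Y = np.fft.fft(events), np.fft.fft(bold)
lam = 1e-6
hrf_hat = np.fft.ifft(np.conj(S) * Y / (np.abs(S) ** 2 + lam)).real
```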

  1. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.
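
Structure tensor TV generalizes total variation to penalize local directional structure; the flavor of TV-regularized residue-function denoising can be conveyed with a plain 1D smoothed-TV objective minimized by gradient descent (a toy stand-in for the authors' shrinkage/thresholding scheme):

```python
import numpy as np

rng = np.random.default_rng(5)

# Piecewise-constant "residue function" plus measurement noise.
truth = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = truth + rng.normal(0.0, 0.2, truth.size)

def tv_denoise(y, lam=0.5, step=0.02, iters=2000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_j sqrt(d_j^2 + eps),
    a smoothed 1D total-variation objective (d_j are first differences)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)    # derivative of sqrt(d^2 + eps)
        tv_grad = np.concatenate([[0.0], g]) - np.concatenate([g, [0.0]])
        x -= step * ((x - y) + lam * tv_grad)
    return x

denoised = tv_denoise(noisy)
```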

  2. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating a large covariance matrix for understanding correlation structure, inverse covariance matrix estimation for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes, proteins and genetic markers for complex diseases, and high-dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
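
Large covariance estimation, the first method class reviewed, is often made tractable by sparsity assumptions, e.g. entry-wise thresholding of the sample covariance. A minimal sketch showing that thresholding moves the estimate closer to a sparse truth:

```python
import numpy as np

rng = np.random.default_rng(11)
p, n = 50, 40
truth = np.eye(p)                        # true covariance: identity (sparse)
data = rng.normal(size=(n, p))           # n samples of p correlated-free genes

sample_cov = np.cov(data, rowvar=False)

def threshold(cov, t):
    """Universal thresholding: zero out small off-diagonal entries."""
    out = np.where(np.abs(cov) >= t, cov, 0.0)
    np.fill_diagonal(out, np.diag(cov))  # keep variances untouched
    return out

est = threshold(sample_cov, t=0.25)

err_raw = np.linalg.norm(sample_cov - truth)  # Frobenius errors
err_thr = np.linalg.norm(est - truth)
```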

  3. Punctuated Equilibrium in Statistical Models of Generalized Coevolutionary Resilience: How Sudden Ecosystem Transitions Can Entrain Both Phenotype Expression and Darwinian Selection

    Science.gov (United States)

    Wallace, Rodrick; Wallace, Deborah

    We argue that mesoscale ecosystem resilience shifts akin to sudden phase transitions in physical systems can entrain similarly punctuated events of gene expression on more rapid time scales, and, in part through such means, slower changes induced by selection pressure, triggering punctuated equilibrium Darwinian evolutionary transitions on geologic time scales. The approach reduces ecosystem, gene expression, and Darwinian genetic dynamics to a least common denominator of information sources interacting by crosstalk at markedly differing rates. Pettini's 'topological hypothesis', via a homology between information source uncertainty and free energy density, generates a regression-like class of statistical models of sudden coevolutionary phase transition based on the Rate Distortion and Shannon-McMillan Theorems of information theory which links all three levels. A mathematical treatment of Holling's extended keystone hypothesis regarding the particular role of mesoscale phenomena in entraining both slower and faster dynamical structures produces the result. A main theme is the necessity of a cognitive paradigm for gene expression, mirroring I. Cohen's cognitive approach to immune function. Invocation of the necessary conditions imposed by the asymptotic limit theorems of communication theory enables us to penetrate one layer more deeply before needing to impose an empirically-derived phenomenological system of 'Onsager relation' recursive coevolutionary stochastic differential equations. Extending the development to second order via a large deviations argument permits modeling the influence of human cultural structures on ecosystems as 'farming'.

  4. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
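
    The counting idea can be illustrated with a toy statistic (a hypothetical simplification, not the authors' proposed statistics): score a gene pair by the fraction of size-k sample subsets on which the two expression profiles induce the same rank ordering. Robustness to outliers comes from using only ranks.

```python
from itertools import combinations

def rank_pattern_count(x, y, k=3):
    """Fraction of size-k sample subsets on which genes x and y show the
    same expression-rank ordering (1.0 = fully concordant local patterns).
    A toy illustration of counting local rank patterns."""
    n = len(x)
    matches = total = 0
    for idx in combinations(range(n), k):
        total += 1
        # order the chosen samples by each gene's expression value
        if sorted(idx, key=lambda i: x[i]) == sorted(idx, key=lambda i: y[i]):
            matches += 1
    return matches / total
```

    On monotonically related profiles every local pattern agrees; on anti-correlated profiles none do.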

  5. Using the Statistical Indicators for the General Insurances Activity

    Directory of Open Access Journals (Sweden)

    Ion Partachi

    2007-04-01

    Full Text Available The statistics of the general insurances activity is largely used in actuarial calculations. Actuarial analyses are carried out exclusively on the basis of primary and derived indicators, which are drawn up by various statistical methods. The statistical indicators used in this respect are obtained on the basis of the factors and conditions allowing compensation cases to occur. The actuarial analysis is also performed over time, by using chronological series, which allow the decomposition of the phenomenon being studied by its factors of influence. In this article, after briefly presenting a number of points of view regarding the use of statistical indicators in actuarial analysis, we successively examine a series of issues, such as: the statistical indicators regarding the forming of the general insurances fund, expressed in physical and value units, or as absolute, relative and average volumes; the statistical indicators of the utilization of the general insurances funds (with the same diversified forms of expression); and the statistical indicators of the outcomes of the general insurances activity. Particular emphasis is placed on certain methodological aspects regarding the calculation of the above-mentioned indicators, highlighting particular characteristics of their use within actuarial analysis. The article stresses that these indicators are used in actuarial analysis as a real system. The respective proportions are enumerated, underlining the concrete possibilities of computation, which secure the possibility of performing the analyses involved in a decisional process.

  6. The use of deconvolution techniques to identify the fundamental mixing characteristics of urban drainage structures.

    Science.gov (United States)

    Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G

    2010-01-01

    Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.
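
    The core inverse problem can be sketched in a few lines (illustrative names; this naive division is exactly the noise-sensitive step that motivates the paper's more robust, systems-theoretic identification): if the downstream profile y is the discrete convolution of the upstream profile x with a residence time distribution h, then h can be recovered by sequential elimination, and the CRTD is its normalised running sum.

```python
def deconvolve(y, x):
    """Recover h such that y = x * h (discrete convolution), assuming
    x[0] != 0.  Naive and noise-sensitive: real laboratory data need a
    regularised, systems-theoretic approach."""
    n = len(y) - len(x) + 1
    h = [0.0] * n
    for i in range(n):
        acc = y[i]
        for j in range(1, min(i, len(x) - 1) + 1):
            acc -= x[j] * h[i - j]   # subtract already-explained signal
        h[i] = acc / x[0]
    return h

def crtd(h):
    """Cumulative residence time distribution: running sum of h,
    normalised to end at 1."""
    total, acc, out = sum(h), 0.0, []
    for v in h:
        acc += v
        out.append(acc / total)
    return out
```

    For example, deconvolving [1, 3, 3, 2] by [1, 2] recovers the unit impulse response [1, 1, 1].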

  7. Semiclassical statistical mechanics

    International Nuclear Information System (INIS)

    Stratt, R.M.

    1979-04-01

    On the basis of an approach devised by Miller, a formalism is developed which allows the nonperturbative incorporation of quantum effects into equilibrium classical statistical mechanics. The resulting expressions bear a close similarity to classical phase space integrals and, therefore, are easily molded into forms suitable for examining a wide variety of problems. As a demonstration of this, three such problems are briefly considered: the simple harmonic oscillator, the vibrational state distribution of HCl, and the density-independent radial distribution function of He-4. A more detailed study is then made of two more general applications involving the statistical mechanics of nonanalytic potentials and of fluids. The former, which is a particularly difficult problem for perturbative schemes, is treated with only limited success by restricting phase space and by adding an effective potential. The problem of fluids, however, is readily found to yield to a semiclassical pairwise interaction approximation, which in turn permits any classical many-body model to be expressed in a convenient form. The remainder of the discussion concentrates on some ramifications of having a phase space version of quantum mechanics. To test the breadth of the formulation, the task of constructing quantal ensemble averages of phase space functions is undertaken, and in the process several limitations of the formalism are revealed. A rather different approach is also pursued. The concept of quantum mechanical ergodicity is examined through the use of numerically evaluated eigenstates of the Barbanis potential, and the existence of this quantal ergodicity - normally associated with classical phase space - is verified. 21 figures, 4 tables

  8. Classical model of intermediate statistics

    International Nuclear Information System (INIS)

    Kaniadakis, G.

    1994-01-01

    In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T=0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions.
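
    The three limiting distributions can be summarised by the textbook interpolating occupation formula below (a standard parametrisation given for orientation; it is not the paper's kinetic equation, whose steady states these distributions are):

```python
import math

def occupation(e, mu, T, kappa):
    """Mean occupation number with statistics parameter kappa
    (units with k_B = 1): kappa = +1 gives Fermi-Dirac, kappa = -1
    Bose-Einstein, kappa = 0 Maxwell-Boltzmann; intermediate values
    interpolate between them."""
    return 1.0 / (math.exp((e - mu) / T) + kappa)
```

    At e = mu the Fermi-Dirac case gives the familiar half-filled level, occupation 1/2.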

  9. Thermodynamic properties of particles with intermediate statistics

    International Nuclear Information System (INIS)

    Joyce, G.S.; Sarkar, S.; Spal/ek, J.; Byczuk, K.

    1996-01-01

    Analytic expressions for the distribution function of an ideal gas of particles (exclusons) which have statistics intermediate between Fermi-Dirac and Bose-Einstein are obtained for all values of the Haldane statistics parameter α ∈ [0,1]. The analytic structure of the distribution function is investigated and found to have no singularities in the physical region when the parameter α lies in the range 0 ≤ α ≤ 1. Low-temperature series are obtained for the thermodynamic properties of the D-dimensional excluson gas, and these series illustrate the pseudofermion nature of exclusons. copyright 1996 The American Physical Society
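
    For orientation, the excluson distribution function is commonly written in Wu's implicit form, n(ε) = 1/(w + α) with w^α (1 + w)^(1-α) = e^((ε-μ)/kT); a short numerical sketch (bisection; names illustrative, k_B = 1) recovers Fermi-Dirac at α = 1 and Bose-Einstein at α = 0:

```python
import math

def excluson_occupation(e, mu, T, alpha):
    """Occupation number for Haldane-Wu fractional exclusion statistics:
    n = 1/(w + alpha), where w solves w**alpha * (1+w)**(1-alpha)
    = exp((e - mu)/T).  alpha = 1 gives Fermi-Dirac, alpha = 0
    Bose-Einstein (take e > mu so the bosonic root exists; k_B = 1)."""
    rhs = math.exp((e - mu) / T)
    lo, hi = 0.0, max(rhs, 1.0) + 1.0          # lhs is increasing in w
    while hi ** alpha * (1.0 + hi) ** (1.0 - alpha) < rhs:
        hi *= 2.0                              # grow until the root is bracketed
    for _ in range(200):                       # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if mid ** alpha * (1.0 + mid) ** (1.0 - alpha) < rhs:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi) + alpha)
```

    Intermediate α values interpolate smoothly between the two quantum limits.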

  10. Statistical fluid theory for associating fluids containing alternating ...

    Indian Academy of Sciences (India)

    Statistical associating fluid theory of homonuclear dimerized chain fluids and homonuclear ... The proposed models account for the appropriate .... where gHNM(1,1) is the expression for the contact value of the correlation function of two ...

  11. PDE7B is a novel, prognostically significant mediator of glioblastoma growth whose expression is regulated by endothelial cells.

    Directory of Open Access Journals (Sweden)

    Michael D Brooks

    Full Text Available Cell-cell interactions between tumor cells and constituents of their microenvironment are critical determinants of tumor tissue biology and therapeutic responses. Interactions between glioblastoma (GBM cells and endothelial cells (ECs establish a purported cancer stem cell niche. We hypothesized that genes regulated by these interactions would be important, particularly as therapeutic targets. Using a computational approach, we deconvoluted expression data from a mixed physical co-culture of GBM cells and ECs and identified a previously undescribed upregulation of the cAMP specific phosphodiesterase PDE7B in GBM cells in response to direct contact with ECs. We further found that elevated PDE7B expression occurs in most GBM cases and has a negative effect on survival. PDE7B overexpression resulted in the expansion of a stem-like cell subpopulation in vitro and increased tumor growth and aggressiveness in an in vivo intracranial GBM model. Collectively these studies illustrate a novel approach for studying cell-cell interactions and identifying new therapeutic targets like PDE7B in GBM.

  12. Study of the inner part of the β Pictoris dust disk: deconvolution of 10 micron images and modelling of the dust emission

    International Nuclear Information System (INIS)

    Pantin, Eric

    1996-01-01

    In 1984, observations by the infrared satellite IRAS showed that numerous main-sequence stars are surrounded by a relatively tenuous dust disk. The most studied example is the disk of the star Beta Pictoris. Coronagraphic observations are limited to the outermost regions of the disk; in the infrared, this is not the case. We used an infrared camera to obtain 10 micron images of the central regions. In order to deduce the dust density, several requirements must be fulfilled. First, we had to deconvolve these images, which are degraded by a combination of diffraction and seeing. We initially used standard methods (Richardson-Lucy, maximum entropy, etc.), then developed a new method of astronomical image deconvolution and filtering based on regularization by multi-scale maximum entropy. We then built a model of the thermal emission of the dust to calculate the temperature of the grains. The resulting density shows a region between the star and 50-60 Astronomical Units that is depleted of dust. The density is compatible with models simulating the gravitational interactions between such a disk and a planet with half the mass of Saturn. We refined the models of the particles' emission (mixtures of several materials; porous or compact particles; coated with ice or not) to build a global model of the disk taking into account all the observables: IRAS infrared fluxes, 10 and 20 micron fluxes, the 10 micron spectrum, and scattered fluxes in the visible. In our best model, the particles are porous silicate grains (a mixture of olivine and pyroxene) coated with a refractory organic mantle, which becomes 'frozen' (coated with ice) beyond a distance of 90 Astronomical Units from the star. This model allows us to predict an infrared spectrum showing the characteristic emission of ice around 45-50 microns, which will be compared with the observations of the infrared satellite ISO. (author) [fr

  13. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), yield usually high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to receive tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  14. A statistical method for predicting splice variants between two groups of samples using GeneChip® expression array data

    Directory of Open Access Journals (Sweden)

    Olson James M

    2006-04-01

    Full Text Available Abstract Background Alternative splicing of pre-messenger RNA results in RNA variants with combinations of selected exons. It is one of the essential biological functions and regulatory components in higher eukaryotic cells. Some of these variants are detectable with the Affymetrix GeneChip® that uses multiple oligonucleotide probes (i.e. a probe set), since the target sequences for the multiple probes are adjacent within each gene. Hybridization intensity from a probe correlates with abundance of the corresponding transcript. Although the multiple-probe feature in the current GeneChip® was designed to assess expression values of individual genes, it also measures transcriptional abundance for a sub-region of a gene sequence. This additional capacity motivated us to develop a method to predict alternative splicing, taking advantage of extensive repositories of GeneChip® gene expression array data. Results We developed a two-step approach to predict alternative splicing from GeneChip® data. First, we clustered the probes from a probe set into pseudo-exons based on similarity of probe intensities and physical adjacency. A pseudo-exon is defined as a sequence in the gene within which multiple probes have comparable probe intensity values. Second, for each pseudo-exon, we assessed the statistical significance of the difference in probe intensity between two groups of samples. Differentially expressed pseudo-exons are predicted to be alternatively spliced. We applied our method to empirical data generated from GeneChip® Hu6800 arrays, which include 7129 probe sets and twenty probes per probe set. The dataset consists of sixty-nine medulloblastoma samples (27 metastatic and 42 non-metastatic) and four cerebellum samples as normal controls. We predicted that 577 genes would be alternatively spliced when we compared normal cerebellum samples to medulloblastomas, and predicted that thirteen genes would be alternatively spliced when we compared metastatic
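
    The two-step approach can be caricatured as follows (hypothetical function names and thresholds; a normal approximation stands in for the paper's actual significance assessment): adjacent probes with comparable intensities are merged into pseudo-exons, and each pseudo-exon is then tested between the two sample groups.

```python
from statistics import mean, stdev, NormalDist

def pseudo_exons(probe_means, gap=1.0):
    """Step 1 (sketch): cluster physically adjacent probes whose mean
    intensities differ by less than `gap` into pseudo-exons; returns
    lists of probe indices."""
    groups, current = [], [0]
    for i in range(1, len(probe_means)):
        if abs(probe_means[i] - probe_means[i - 1]) < gap:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def splice_test(a, b):
    """Step 2 (sketch): two-sample test on a pseudo-exon's intensities in
    groups a and b; a normal approximation replaces a proper t-test."""
    t = (mean(a) - mean(b)) / ((stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5)
    return 2.0 * (1.0 - NormalDist().cdf(abs(t)))
```

    A pseudo-exon with a small p-value in the second step is the sketch's analogue of a predicted splice variant.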

  15. Incomplete nonextensive statistics and the zeroth law of thermodynamics

    International Nuclear Information System (INIS)

    Huang Zhi-Fu; Ou Cong-Jie; Chen Jin-Can

    2013-01-01

    On the basis of the entropy of incomplete statistics (IS) and the joint probability factorization condition, two controversial problems existing in IS are investigated: one is what expression of the internal energy is reasonable for a composite system and the other is whether the traditional zeroth law of thermodynamics is suitable for IS. Some new equivalent expressions of the internal energy of a composite system are derived through accurate mathematical calculation. Moreover, a self-consistent calculation is used to expound that the zeroth law of thermodynamics is also suitable for IS, but it cannot be proven theoretically. Finally, it is pointed out that the generalized zeroth law of thermodynamics for incomplete nonextensive statistics is unnecessary and the nonextensive assumptions for the composite internal energy will lead to mathematical contradiction. (general)

  16. Gene Expression Commons: an open platform for absolute gene expression profiling.

    Directory of Open Access Journals (Sweden)

    Jun Seita

    Full Text Available Gene expression profiling using microarrays has been limited to comparisons of gene expression between small numbers of samples within individual experiments. However, the unknown and variable sensitivities of each probeset have rendered the absolute expression of any given gene nearly impossible to estimate. We have overcome this limitation by using a very large number (>10,000) of varied microarray data as a common reference, so that statistical attributes of each probeset, such as the dynamic range and threshold between low and high expression, can be reliably discovered through meta-analysis. This strategy is implemented in a web-based platform named "Gene Expression Commons" (https://gexc.stanford.edu/), which contains data on 39 distinct highly purified mouse hematopoietic stem/progenitor/differentiated cell populations covering almost the entire hematopoietic system. Since the Gene Expression Commons is designed as an open platform, investigators can explore the expression level of any gene, search by expression patterns of interest, submit their own microarray data, and design their own working models representing biological relationships among samples.
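
    The meta-analysis step that makes absolute calls possible, locating each probeset's threshold between low and high expression across a very large reference set of arrays, can be sketched as a one-dimensional two-cluster split (an illustrative reduction, not the platform's actual model):

```python
def probeset_threshold(values, iters=50):
    """Estimate a probeset's low/high expression threshold from many
    arrays by 1-D 2-means: the threshold is the midpoint between the
    'low' and 'high' cluster centres.  Assumes both clusters stay
    non-empty (a sketch of the meta-analysis idea)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        low = [v for v in values if abs(v - lo) <= abs(v - hi)]
        high = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(low) / len(low), sum(high) / len(high)
    return (lo + hi) / 2.0
```

    With the threshold in hand, a new measurement of that probeset can be called "low" or "high" in absolute terms.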

  17. Spectral statistics of chaotic many-body systems

    International Nuclear Information System (INIS)

    Dubertrand, Rémy; Müller, Sebastian

    2016-01-01

    We derive a trace formula that expresses the level density of chaotic many-body systems as a smooth term plus a sum over contributions associated to solutions of the nonlinear Schrödinger (or Gross–Pitaevski) equation. Our formula applies to bosonic systems with discretised positions, such as the Bose–Hubbard model, in the semiclassical limit as well as in the limit where the number of particles is taken to infinity. We use the trace formula to investigate the spectral statistics of these systems, by studying interference between solutions of the nonlinear Schrödinger equation. We show that in the limits taken the statistics of fully chaotic many-particle systems becomes universal and agrees with predictions from the Wigner–Dyson ensembles of random matrix theory. The conditions for Wigner–Dyson statistics involve a gap in the spectrum of the Frobenius–Perron operator, leaving the possibility of different statistics for systems with weaker chaotic properties. (paper)

  18. Effectively identifying regulatory hotspots while capturing expression heterogeneity in gene expression studies

    Science.gov (United States)

    2014-01-01

    Expression quantitative trait loci (eQTL) mapping is a tool that can systematically identify genetic variation affecting gene expression. eQTL mapping studies have shown that certain genomic locations, referred to as regulatory hotspots, may affect the expression levels of many genes. Recently, studies have shown that various confounding factors may induce spurious regulatory hotspots. Here, we introduce a novel statistical method that effectively eliminates spurious hotspots while retaining genuine hotspots. Applied to simulated and real datasets, we validate that our method achieves greater sensitivity while retaining low false discovery rates compared to previous methods. PMID:24708878

  19. High-spatial-resolution localization algorithm based on cascade deconvolution in a distributed Sagnac interferometer invasion monitoring system.

    Science.gov (United States)

    Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi

    2016-10-10

    In the Sagnac fiber optic interferometer system, the phase difference signal can be expressed as a convolution of the invasion waveform with a transfer function h(t) associated with the position at which the invasion occurs; deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4×10⁶ s⁻¹, the algorithm goes through three main steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of invasion distance can be ascertained. Second, a narrower region of invasion distance is acquired by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) attains its minimum standard deviation. Third, fine sweeping of the narrow region, point by point with the same criterion, yields the final localization. As a by-product, the original invasion waveform can be restored for the first time, which provides more accurate and purer characteristics for further processing, such as subsequent pattern recognition.
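
    Steps two and three, sweeping candidate transfer functions and keeping the one whose restored waveform has minimum standard deviation, can be reduced to a toy version in which h(t) = δ(t) − δ(t − τ) and the lag τ encodes the invasion position (the paper's actual h(t) is more involved; all names here are illustrative):

```python
from statistics import pstdev

def restore(y, lag):
    """Invert y[i] = x[i] - x[i - lag], a simple Sagnac-like transfer
    function, to recover the invasion waveform x."""
    x = list(y)
    for i in range(lag, len(y)):
        x[i] = y[i] + x[i - lag]
    return x

def locate(y, max_lag):
    """Sweep candidate lags; the true lag minimises the spread of the
    restored waveform (the minimum-standard-deviation criterion)."""
    return min(range(1, max_lag), key=lambda d: pstdev(restore(y, d)))
```

    Wrong lags leave spurious replicas in the restored waveform, inflating its standard deviation, so the sweep singles out the true position.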

  20. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
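
    The flavour of modeling test statistics directly can be sketched with a fixed two-component normal mixture (in the method the components and the null weight are estimated, and model assessment matters; everything below, including the thresholds, is illustrative):

```python
from statistics import NormalDist

def posterior_null(z, pi0=0.9, null=NormalDist(0.0, 1.0), alt=NormalDist(2.0, 1.0)):
    """Posterior probability that test statistic z came from the null
    component (components fixed here for illustration; in practice
    they would be estimated, e.g. by EM)."""
    f0 = pi0 * null.pdf(z)
    f1 = (1.0 - pi0) * alt.pdf(z)
    return f0 / (f0 + f1)

def bayesian_fdr_reject(zs, q=0.05):
    """Reject the largest set of hypotheses whose average posterior null
    probability (the Bayesian FDR of the rejection set) stays below q;
    returns the number of rejections."""
    probs = sorted(posterior_null(z) for z in zs)
    k, s = 0, 0.0
    for i, p in enumerate(probs, 1):
        s += p
        if s / i <= q:
            k = i
    return k
```

    Averaging the posterior null probabilities over the rejected set is what keeps the Bayesian FDR of the rejections below the target q.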

  1. Lower Bmi-1 Expression May Predict Longer Survival of Colon Cancer Patients

    Directory of Open Access Journals (Sweden)

    Xiaodong Li

    2016-11-01

    Full Text Available Background: This study aimed to investigate Bmi-1 expression and its clinical significance in colon cancer (CC). Patients and Methods: Bmi-1 expression in tumor tissue and the corresponding normal tissue was detected using immunohistological staining. The correlations between Bmi-1 expression and clinicopathological characteristics and the overall survival (OS) time were analyzed. Results: The median H-scores of Bmi-1 in CC tissues and the corresponding normal tissues were 80.0 (0-270) and 5.0 (0-90), respectively, a statistically significant difference (Z = -13.7, P < 0.001); the association with clinicopathological characteristics was not significant (P = 0.123). The survival rates of patients with low Bmi-1 expression were higher than those of patients with high Bmi-1 expression, but the differences were not statistically significant. Conclusion: Bmi-1 expression in CC tissue is significantly higher than that in corresponding normal tissue. While there may be a trend towards improved survival, it is not statistically significant.

  2. Isotropic non-white matter partial volume effects in constrained spherical deconvolution

    Directory of Open Access Journals (Sweden)

    Timo eRoine

    2014-03-01

    Full Text Available Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple nonparallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion weighting (2500-3000 s/mm²), reasonable SNR (~30), and using lower SH orders in GM-contaminated regions to minimize the non-WM PVEs.

  3. Isotropic non-white matter partial volume effects in constrained spherical deconvolution.

    Science.gov (United States)

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan

    2014-01-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. 
We suggest acquiring data with high diffusion weighting (2500-3000 s/mm(2)), reasonable SNR (~30), and using lower SH orders in GM-contaminated regions to minimize the non-WM PVEs in CSD.

  4. Statistical comparison of two or more SAGE libraries: one tag at a time

    NARCIS (Netherlands)

    Schaaf, Gerben J.; van Ruissen, Fred; van Kampen, Antoine; Kool, Marcel; Ruijter, Jan M.

    2008-01-01

    Several statistical tests have been introduced for the comparison of serial analysis of gene expression (SAGE) libraries to quantitatively analyze the differential expression of genes. As each SAGE library is only one measurement, the necessary information on biological variation or experimental

  5. SpaSM: A MATLAB Toolbox for Sparse Statistical Modeling

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Clemmensen, Line Harder; Larsen, Rasmus

    2018-01-01

    Applications in biotechnology such as gene expression analysis and image processing have led to a tremendous development of statistical methods with emphasis on reliable solutions to severely underdetermined systems. Furthermore, interpretations of such solutions are of importance, meaning...

  6. Deconvoluting complex tissues for expression quantitative trait locus-based analyses

    DEFF Research Database (Denmark)

    Seo, Ji-Heui; Li, Qiyuan; Fatima, Aquila

    2013-01-01

    Breast cancer genome-wide association studies have pinpointed dozens of variants associated with breast cancer pathogenesis. The majority of risk variants, however, are located outside of known protein-coding regions. Therefore, identifying which genes the risk variants are acting through present...

  7. Counting statistics of many-particle quantum walks

    Science.gov (United States)

    Mayer, Klaus; Tichy, Malte C.; Mintert, Florian; Konrad, Thomas; Buchleitner, Andreas

    2011-06-01

    We study quantum walks of many noninteracting particles on a beam splitter array as a paradigmatic testing ground for the competition of single- and many-particle interference in a multimode system. We derive a general expression for multimode particle-number correlation functions, valid for bosons and fermions, and infer pronounced signatures of many-particle interferences in the counting statistics.

  8. Counting statistics of many-particle quantum walks

    International Nuclear Information System (INIS)

    Mayer, Klaus; Tichy, Malte C.; Buchleitner, Andreas; Mintert, Florian; Konrad, Thomas

    2011-01-01

    We study quantum walks of many noninteracting particles on a beam splitter array as a paradigmatic testing ground for the competition of single- and many-particle interference in a multimode system. We derive a general expression for multimode particle-number correlation functions, valid for bosons and fermions, and infer pronounced signatures of many-particle interferences in the counting statistics.

  9. Decoupling Linear and Nonlinear Associations of Gene Expression

    KAUST Repository

    Itakura, Alan

    2013-05-01

The FANTOM consortium has generated a large gene expression dataset of different cell lines and tissue cultures using the single-molecule sequencing technology of HeliscopeCAGE. This provides a unique opportunity to investigate novel associations between gene expression over time and different cell types. Here, we create a MATLAB wrapper for a powerful but computationally intensive statistic known as the Maximal Information Coefficient, and calculate it for a large, comprehensive dataset containing gene expression of a variety of differentiating tissues. We distinguish between linear and nonlinear associations, and create gene association networks. This analysis identifies clusters of linear gene associations that in turn associate nonlinearly with other linear clusters, providing insight into much more complex connections between gene expression patterns than previously anticipated.
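As a rough illustration of the linear-versus-nonlinear distinction described above (not the authors' MATLAB wrapper or the actual MIC implementation), one can contrast a Pearson-based score with a simple histogram estimate of mutual information; all variable names and parameter choices here are illustrative assumptions:

```python
import numpy as np

def pearson_r2(x, y):
    """Squared Pearson correlation: captures only linear association."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (nats): captures any association."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
linear = 2 * x + rng.normal(0, 0.1, x.size)       # strong linear dependence
nonlinear = x ** 2 + rng.normal(0, 0.1, x.size)   # strong but purely nonlinear dependence

print(pearson_r2(x, linear))            # close to 1: linear methods see it
print(pearson_r2(x, nonlinear))         # close to 0: linear methods miss it
print(mutual_information(x, nonlinear)) # clearly above the independent-noise baseline
```

A dependence like y = x² is invisible to correlation but not to an information-theoretic score, which is the gap MIC-style statistics are designed to close.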

  11. Deconvolution of the thermoluminescent emission curve. Second order kinetics

    International Nuclear Information System (INIS)

    Moreno y M, A.; Moreno B, A.

    1999-01-01

This work describes the Randall-Wilkins second-order kinetics implemented in Microsoft Excel, which allows the glow curve to be expressed as a sum of Gaussians plus the corresponding correction factors. These factors are obtained from the differences between the measured thermoluminescent curve and the proposed Gaussians. The results obtained justify the Gaussian expression with the added correction factor. (Author)
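A minimal numerical sketch of this approach, assuming the standard Garlick-Gibson second-order glow-peak expression and illustrative trap parameters (none of these values are taken from the paper): compute the second-order curve, subtract a Gaussian of matching height, position and FWHM, and treat the residual as the correction factor:

```python
import numpy as np

k_B = 8.617e-5                 # Boltzmann constant (eV/K)
E, s, beta = 1.0, 1e12, 1.0    # trap depth (eV), frequency factor (1/s), heating rate (K/s)

T = np.linspace(300.0, 500.0, 2000)
boltz = np.exp(-E / (k_B * T))
# running integral of exp(-E/kT') dT' by the trapezoid rule
integ = np.concatenate(([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))
# Garlick-Gibson second-order glow peak (initial occupancy normalized to 1)
I = s * boltz / (1.0 + (s / beta) * integ) ** 2

# Gaussian matched in height, position and FWHM to the second-order peak
i_max = I.argmax()
half = np.where(I >= I[i_max] / 2)[0]
fwhm = T[half[-1]] - T[half[0]]
sigma = fwhm / 2.3548
G = I[i_max] * np.exp(-0.5 * ((T - T[i_max]) / sigma) ** 2)

correction = I - G   # the correction factor to be added to the Gaussian
```

The residual is zero at the peak by construction and nonzero in the wings, where the second-order shape departs from a Gaussian.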

  12. Optimization of Soluble Expression and Purification of Recombinant Human Rhinovirus Type-14 3C Protease Using Statistically Designed Experiments: Isolation and Characterization of the Enzyme.

    Science.gov (United States)

    Antoniou, Georgia; Papakyriacou, Irineos; Papaneophytou, Christos

    2017-10-01

    Human rhinovirus (HRV) 3C protease is widely used in recombinant protein production for various applications such as biochemical characterization and structural biology projects to separate recombinant fusion proteins from their affinity tags in order to prevent interference between these tags and the target proteins. Herein, we report the optimization of expression and purification conditions of glutathione S-transferase (GST)-tagged HRV 3C protease by statistically designed experiments. Soluble expression of GST-HRV 3C protease was initially optimized by response surface methodology (RSM), and a 5.5-fold increase in enzyme yield was achieved. Subsequently, we developed a new incomplete factorial (IF) design that examines four variables (bacterial strain, expression temperature, induction time, and inducer concentration) in a single experiment. The new design called Incomplete Factorial-Strain/Temperature/Time/Inducer (IF-STTI) was validated using three GST-tagged proteins. In all cases, IF-STTI resulted in only 10% lower expression yields than those obtained by RSM. Purification of GST-HRV 3C was optimized by an IF design that examines simultaneously the effect of the amount of resin, incubation time of cell lysate with resin, and glycerol and DTT concentration in buffers, and a further 15% increase in protease recovery was achieved. Purified GST-HRV 3C protease was active at both 4 and 25 °C in a variety of buffers.

  13. Determination of the Projected Atomic Potential by Deconvolution of the Auto-Correlation Function of TEM Electron Nano-Diffraction Patterns

    Directory of Open Access Journals (Sweden)

    Liberato De Caro

    2016-11-01

We present a novel method to determine the projected atomic potential of a specimen directly from transmission electron microscopy coherent electron nano-diffraction patterns, overcoming common limitations encountered so far due to the dynamical nature of electron-matter interaction. The projected potential is obtained by deconvolution of the inverse Fourier transform of experimental diffraction patterns rescaled in intensity by using theoretical values of the kinematical atomic scattering factors. This enables the compensation of dynamical effects typical of transmission electron microscopy (TEM) experiments on standard specimens with thicknesses up to a few tens of nm. The projected atomic potentials so obtained are averaged over sample regions illuminated by nano-sized electron probes and are in good quantitative agreement with theoretical expectations. Contrary to lens-based microscopy, here the spatial resolution of the retrieved projected atomic potential profiles is related to the finest lattice spacing measured in the electron diffraction pattern. The method has been successfully applied to experimental nano-diffraction data of crystalline centrosymmetric and non-centrosymmetric specimens, achieving a resolution of 65 pm.

  14. Dynamic association rules for gene expression data analysis.

    Science.gov (United States)

    Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung

    2015-10-14

The purpose of gene expression analysis is to look for the association between regulation of gene expression levels and phenotypic variations. This association based on gene expression profiles has been used to determine whether the induction/repression of genes corresponds to phenotypic variations including cell regulations, clinical diagnoses and drug development. Statistical analyses on microarray data have been developed to resolve the gene selection issue. However, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine if an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of Leukemia patients, the Microarray Quality Control (MAQC) dataset and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test on the expression profiling of the bone marrow of Leukemia patients was conducted. We developed a statistical way, based on the concept of confidence interval, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in one single step. The DAR algorithm was then developed for gene expression data analysis.
Four gene expression datasets showed that the proposed
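The statistical idea, requiring the one-sided lower confidence bound of a rule's confidence to clear a threshold, can be sketched as follows; this is a generic normal-approximation version with hypothetical counts, not the authors' exact DAR formulation:

```python
import math

def rule_is_meaningful(n_both, n_antecedent, n_total, min_support, min_confidence, z=1.645):
    """Judge rule A -> B: support must reach min_support, and the lower bound of a
    one-sided 95% confidence interval (normal approximation, z = 1.645) for the
    rule's confidence must reach min_confidence."""
    support = n_both / n_total
    confidence = n_both / n_antecedent
    se = math.sqrt(confidence * (1.0 - confidence) / n_antecedent)
    return support >= min_support and confidence - z * se >= min_confidence

# hypothetical counts: gene A up-regulated in 60 of 200 samples, A and B both up in 48
print(rule_is_meaningful(48, 60, 200, min_support=0.1, min_confidence=0.6))   # True
print(rule_is_meaningful(48, 60, 200, min_support=0.1, min_confidence=0.75))  # False
```

Using the interval's lower bound rather than the point estimate of confidence is what turns a descriptive threshold into a statistical test.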

  15. Completely X-symmetric S-matrices corresponding to theta functions and models of statistical mechanics

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.; Chudnovsky, G.V.

    1981-01-01

    We consider general expressions of factorized S-matrices with Abelian symmetry expressed in terms of theta-functions. These expressions arise from representations of the Heisenberg group. New examples of factorized S-matrices lead to a large class of completely integrable models of statistical mechanics which generalize the XYZ-model of the eight-vertex model. (orig.)

  16. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. By the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore it satisfies the requirements of the equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations, as those stemming from the Boltzmann-Gibbs statistics in this limit.
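For reference, the Renyi entropy underlying these statistics and its q → 1 Boltzmann-Gibbs limit (a standard textbook definition, stated here for convenience rather than quoted from the paper):

```latex
S_q^{R} = \frac{1}{1-q}\,\ln\!\sum_i p_i^{\,q},
\qquad
\lim_{q \to 1} S_q^{R} = -\sum_i p_i \ln p_i = S_{BG}.
```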

  17. Synaptic Transmission Optimization Predicts Expression Loci of Long-Term Plasticity.

    Science.gov (United States)

    Costa, Rui Ponte; Padamsey, Zahid; D'Amour, James A; Emptage, Nigel J; Froemke, Robert C; Vogels, Tim P

    2017-09-27

Long-term modifications of neuronal connections are critical for reliable memory storage in the brain. However, their locus of expression, pre- or postsynaptic, is highly variable. Here we introduce a theoretical framework in which long-term plasticity performs an optimization of the postsynaptic response statistics toward a given mean with minimal variance. Consequently, the state of the synapse at the time of plasticity induction determines the ratio of pre- and postsynaptic modifications. Our theory explains the experimentally observed expression loci of the hippocampal and neocortical synaptic potentiation studies we examined. Moreover, the theory predicts presynaptic expression of long-term depression, consistent with experimental observations. At inhibitory synapses, the theory suggests a statistically efficient excitatory-inhibitory balance in which changes in inhibitory postsynaptic response statistics specifically target the mean excitation. Our results provide a unifying theory for understanding the expression mechanisms and functions of long-term synaptic transmission plasticity. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
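The flavor of the optimization can be illustrated with a textbook binomial synapse model (this simplified model and all its parameters are our assumption, not the authors' full framework): for a fixed target mean response, shifting the expression locus toward the presynaptic release probability reduces response variance:

```python
def response_stats(n_sites, p_release, q_amp):
    """Binomial synapse: n_sites release sites, presynaptic release probability
    p_release, postsynaptic quantal amplitude q_amp."""
    mean = n_sites * p_release * q_amp
    var = n_sites * p_release * (1.0 - p_release) * q_amp ** 2
    return mean, var

n_sites, target_mean = 10, 5.0
variances = []
for p in (0.3, 0.6, 0.9):
    q = target_mean / (n_sites * p)      # postsynaptic weight attaining the target mean
    mean, var = response_stats(n_sites, p, q)
    variances.append(var)
    print(f"p={p:.1f}  q={q:.2f}  mean={mean:.1f}  var={var:.3f}")
```

All three settings hit the same mean, but variance falls as the presynaptic parameter rises, which is the sense in which the expression locus matters for "a given mean with minimal variance".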

  18. Influence of the beam divergence on the quality neutron radiographic images improved by Richardson-Lucy deconvolution

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2010-01-01

Images produced by radiation transmission, as many others, are affected by disturbances caused by random and systematic uncertainties. Those caused by noise or statistical dispersion can be diminished by a filtering procedure which eliminates the high frequencies associated with the noise, but unfortunately also those belonging to the signal itself. Systematic uncertainties, in principle, could be more effectively removed if one knows the spoiling convolution function causing the degradation of the image. This function depends upon the detector resolution and the non-punctual character of the source employed in the acquisition, which blur the image, making a single point appear as a spot with a vanishing edge. For an extended source exhibiting a reasonably parallel beam, the penumbra degrading the image would be caused by the unavoidable beam divergence. In both cases, the essential information needed to improve the degraded image is the law of transformation of a single point into a blurred spot, known as the point spread function (PSF). Even for an isotropic system, where this function would have a symmetric bell-like shape, it is very difficult to obtain experimentally and to apply to the data processing. For this reason it is usually replaced by an approximate analytical function such as a Gaussian or Lorentzian. In this work, Richardson-Lucy deconvolution has been applied to ameliorate thermal neutron radiographic images acquired with imaging plates, using a Gaussian PSF as deconvolutor. Due to the divergence of the neutron beam, reaching 1 deg 16', the penumbra affecting the final image depends upon the object-detector gap. Moreover, even if the object were placed in direct contact with the detector, the non-zero dimension of the object along the beam path would produce penumbrae of different magnitudes, i.e., the spatial resolution of the system would be dependent upon the object-detector arrangement. 
This means that the width of the PSF increases
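The Richardson-Lucy scheme itself is standard; a minimal 1-D sketch with a Gaussian PSF standing in for the divergence penumbra (all sizes and values illustrative) is:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """1-D Richardson-Lucy deconvolution (all signals non-negative)."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Gaussian PSF standing in for the divergence penumbra
xs = np.arange(-10, 11, dtype=float)
psf = np.exp(-0.5 * (xs / 2.5) ** 2)
psf /= psf.sum()

truth = np.zeros(200)
truth[60], truth[150] = 5.0, 3.0     # point-like features
truth[100:110] = 1.0                 # an extended feature
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)   # visibly sharper than `blurred`
```

The multiplicative update preserves non-negativity and total flux, which is why Richardson-Lucy is well suited to transmission images; the practical difficulty the record discusses is that the real PSF width varies with the object-detector gap.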

  19. Improved Peak Detection and Deconvolution of Native Electrospray Mass Spectra from Large Protein Complexes.

    Science.gov (United States)

    Lu, Jonathan; Trnka, Michael J; Roh, Soung-Hun; Robinson, Philip J J; Shiau, Carrie; Fujimori, Danica Galonic; Chiu, Wah; Burlingame, Alma L; Guan, Shenheng

    2015-12-01

Native electrospray-ionization mass spectrometry (native MS) measures biomolecules under conditions that preserve most aspects of protein tertiary and quaternary structure, enabling direct characterization of large intact protein assemblies. However, native spectra derived from these assemblies are often partially obscured by low signal-to-noise as well as broad peak shapes because of residual solvation and adduction after the electrospray process. The wide peak widths together with the fact that sequential charge state series from highly charged ions are closely spaced means that native spectra containing multiple species often suffer from high degrees of peak overlap or else contain highly interleaved charge envelopes. This situation presents a challenge for peak detection, correct charge state and charge envelope assignment, and ultimately extraction of the relevant underlying mass values of the noncovalent assemblages being investigated. In this report, we describe a comprehensive algorithm developed for addressing peak detection, peak overlap, and charge state assignment in native mass spectra, called PeakSeeker. Overlapped peaks are detected by examination of the second derivative of the raw mass spectrum. Charge state distributions of the molecular species are determined by fitting linear combinations of charge envelopes to the overall experimental mass spectrum. This software is capable of deconvoluting heterogeneous, complex, and noisy native mass spectra of large protein assemblies as demonstrated by analysis of (1) synthetic mononucleosomes containing severely overlapping peaks, (2) an RNA polymerase II/α-amanitin complex with many closely interleaved ion signals, and (3) human TriC complex containing high levels of background noise.
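One building block of any such charge-state deconvolution, inferring charge and neutral mass from two adjacent charge-state peaks, can be sketched as follows (hypothetical peak values; this is not the PeakSeeker algorithm itself):

```python
PROTON = 1.00728  # proton mass in Da (assumed value)

def mass_from_adjacent_peaks(mz_low_z, mz_high_z):
    """Infer charge and neutral mass from two adjacent charge-state peaks:
    mz_low_z is the larger m/z (charge z), mz_high_z the smaller (charge z + 1)."""
    z = round((mz_high_z - PROTON) / (mz_low_z - mz_high_z))
    return z, z * (mz_low_z - PROTON)

# hypothetical 100 kDa complex observed at charges 20+ and 21+
mz20 = 100000.0 / 20 + PROTON
mz21 = 100000.0 / 21 + PROTON
print(mass_from_adjacent_peaks(mz20, mz21))   # ≈ (20, 100000.0)
```

Because each species appears at m/z = (M + z·m_p)/z for a run of charges z, two neighboring peaks over-determine both z and M; fitting whole envelopes, as PeakSeeker does, extends this idea to overlapped and interleaved series.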

  20. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  1. A statistical mechanical approach to restricted integer partition functions

    Science.gov (United States)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-05-01

The main aim of this paper is twofold: (1) suggesting a statistical mechanical approach to the calculation of the generating function of restricted integer partition functions, which count the number of partitions—a way of writing an integer as a sum of other integers under certain restrictions. In this approach, the generating function of restricted integer partition functions is constructed from the canonical partition functions of various quantum gases. (2) Introducing a new type of restricted integer partition function corresponding to general statistics, which is a generalization of Gentile statistics in statistical mechanics; many kinds of restricted integer partition functions are special cases of this restricted integer partition function. Moreover, with statistical mechanics as a bridge, we reveal a mathematical fact: the generating function of a restricted integer partition function is just a symmetric function, a class of functions invariant under the action of permutation groups. Using this approach, we provide some expressions of restricted integer partition functions as examples.
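The generating-function viewpoint can be made concrete with a small dynamic program over the allowed parts; the function below and its bounded-multiplicity option (mimicking Gentile-type restrictions) are an illustrative sketch, not the paper's formalism:

```python
def restricted_partitions(n, parts, max_multiplicity=None):
    """Coefficient of x^n in the product over allowed parts k of (1 + x^k + x^(2k) + ...),
    each part used at most max_multiplicity times (None = unbounded, 'bosonic';
    1 = distinct parts, 'fermionic'; a finite bound mimics Gentile-type statistics)."""
    coeffs = [0] * (n + 1)
    coeffs[0] = 1
    for k in parts:
        new = [0] * (n + 1)
        for i, c in enumerate(coeffs):
            if c == 0:
                continue
            mult = 0
            while i + mult * k <= n and (max_multiplicity is None or mult <= max_multiplicity):
                new[i + mult * k] += c
                mult += 1
        coeffs = new
    return coeffs[n]

print(restricted_partitions(5, range(1, 6)))                       # p(5) = 7
print(restricted_partitions(5, range(1, 6), max_multiplicity=1))   # distinct parts: 3
print(restricted_partitions(5, range(1, 6), max_multiplicity=2))   # each part at most twice: 5
```

The multiplicity bound is exactly the maximum occupation number of a Gentile gas, which is the bridge between quantum-gas partition functions and restricted integer partitions that the paper exploits.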

  2. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
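A minimal sketch of order-statistic combining (illustrative numbers, not the paper's experiments):

```python
import numpy as np

def order_statistic_combiner(outputs, i):
    """Combine classifier outputs by the i-th order statistic (0 = minimum)."""
    return np.sort(outputs, axis=0)[i]

def trim_combiner(outputs, trim=1):
    """Average after dropping the `trim` lowest and highest outputs per point."""
    s = np.sort(outputs, axis=0)
    return s[trim:outputs.shape[0] - trim].mean(axis=0)

# posterior estimates for one test point from 5 classifiers, one of them wildly off
outputs = np.array([[0.81], [0.78], [0.85], [0.80], [0.05]])
print(order_statistic_combiner(outputs, 2))  # median: robust to the outlier
print(trim_combiner(outputs))                # trimmed mean: likewise
print(outputs.mean(axis=0))                  # plain average: dragged down
```

With one badly miscalibrated classifier, the median and trimmed mean stay near the consensus while the linear average does not, which is the robustness property the article quantifies.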

  3. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
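The permutation-based max-statistic approach that the paper scrutinizes can be sketched as follows (synthetic correlated data; the conditioning step proposed by the authors is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_per_group = 200, 10
# correlated test statistics: all genes share a latent factor within each sample
latent = rng.normal(size=(2 * n_per_group, 1))
data = 0.7 * latent + rng.normal(size=(2 * n_per_group, n_genes))
labels = np.array([0] * n_per_group + [1] * n_per_group)

def max_abs_t(data, labels):
    """Test statistic of largest magnitude across all genes (two-sample t)."""
    a, b = data[labels == 0], data[labels == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return float(np.max(np.abs((a.mean(axis=0) - b.mean(axis=0)) / se)))

observed = max_abs_t(data, labels)
perm = np.array([max_abs_t(data, rng.permutation(labels)) for _ in range(500)])
p_value = (1 + np.sum(perm >= observed)) / (1 + len(perm))
```

The shared latent factor makes the 200 t-statistics correlated; the paper's point is that the spread of such a statistic's histogram varies between studies, so the unconditional permutation p-value above can mislead unless one conditions on a spread measure.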

  4. ADAP-GC 3.0: Improved Peak Detection and Deconvolution of Co-eluting Metabolites from GC/TOF-MS Data for Metabolomics Studies.

    Science.gov (United States)

    Ni, Yan; Su, Mingming; Qiu, Yunping; Jia, Wei; Du, Xiuxia

    2016-09-06

    ADAP-GC is an automated computational pipeline for untargeted, GC/MS-based metabolomics studies. It takes raw mass spectrometry data as input and carries out a sequence of data processing steps including construction of extracted ion chromatograms, detection of chromatographic peak features, deconvolution of coeluting compounds, and alignment of compounds across samples. Despite the increased accuracy from the original version to version 2.0 in terms of extracting metabolite information for identification and quantitation, ADAP-GC 2.0 requires appropriate specification of a number of parameters and has difficulty in extracting information on compounds that are in low concentration. To overcome these two limitations, ADAP-GC 3.0 was developed to improve both the robustness and sensitivity of compound detection. In this paper, we report how these goals were achieved and compare ADAP-GC 3.0 against three other software tools including ChromaTOF, AnalyzerPro, and AMDIS that are widely used in the metabolomics community.

  5. ADAP-GC 3.0: Improved Peak Detection and Deconvolution of Co-eluting Metabolites from GC/TOF-MS Data for Metabolomics Studies

    Science.gov (United States)

    Ni, Yan; Su, Mingming; Qiu, Yunping; Jia, Wei

    2017-01-01

ADAP-GC is an automated computational pipeline for untargeted, GC-MS-based metabolomics studies. It takes raw mass spectrometry data as input and carries out a sequence of data processing steps including construction of extracted ion chromatograms, detection of chromatographic peak features, deconvolution of co-eluting compounds, and alignment of compounds across samples. Despite the increased accuracy from the original version to version 2.0 in terms of extracting metabolite information for identification and quantitation, ADAP-GC 2.0 requires appropriate specification of a number of parameters and has difficulty in extracting information on compounds that are in low concentration. To overcome these two limitations, ADAP-GC 3.0 was developed to improve both the robustness and sensitivity of compound detection. In this paper, we report how these goals were achieved and compare ADAP-GC 3.0 against three other software tools including ChromaTOF, AnalyzerPro, and AMDIS that are widely used in the metabolomics community. PMID:27461032

  6. Higher order capacity statistics of multi-hop transmission systems over Rayleigh fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-03-01

In this paper, we present an exact analytical expression to evaluate the higher order statistics of the channel capacity for amplify-and-forward (AF) multihop transmission systems operating over Rayleigh fading channels. Furthermore, we present a simple and efficient closed-form expression for the higher order moments of the channel capacity of a dual-hop transmission system with Rayleigh fading channels. In order to analyze the behavior of the higher order capacity statistics and investigate the usefulness of the mathematical analysis, some selected numerical and simulation results are presented. Our results are found to be in perfect agreement. © 2012 IEEE.
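A Monte Carlo cross-check of such higher-order capacity moments is straightforward; the sketch below assumes the common end-to-end SNR model γ₁γ₂/(γ₁+γ₂+1) for a dual-hop AF link and illustrative average SNRs, not the paper's closed forms:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
gbar1, gbar2 = 10.0, 10.0            # average per-hop SNRs (illustrative)
g1 = rng.exponential(gbar1, n)       # Rayleigh fading -> exponentially distributed SNR
g2 = rng.exponential(gbar2, n)
g_end = g1 * g2 / (g1 + g2 + 1.0)    # end-to-end SNR of a dual-hop AF link
C = np.log2(1.0 + g_end)             # instantaneous capacity (bits/s/Hz)

moments = [float(np.mean(C ** k)) for k in (1, 2, 3, 4)]  # higher order capacity statistics
variance = moments[1] - moments[0] ** 2
```

Simulated moments like these are exactly what the paper's closed-form expressions are validated against ("found to be in perfect agreement").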

  7. Comment on the paper "Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution by G. Kitis, J.M. Gomez-Ros, Nuclear Instruments and Methods in Physics Research A 440, 2000, pp 224-231"

    Science.gov (United States)

    Kazakis, Nikolaos A.

    2018-01-01

    The present comment concerns the correct presentation of an algorithm proposed in the above paper for the glow-curve deconvolution in the case of continuous distribution of trapping states. Since most researchers would use directly the proposed algorithm as published, they should be notified of its correct formulation during the fitting of TL glow curves of materials with continuous trap distribution using this Equation.

  8. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

We present the results of a novel borehole-seismic experiment in which we used different types of onshore-transient-impulsive and non-impulsive-surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the source's emission by its complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources in the far-field seismic signals. The data analysis shows the differences in the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained by different sources show low values of their repeatability norm. The comparison demonstrates the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in VSP onshore data, and to increase the performance of permanent acquisition installations for time-lapse application purposes.
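The core operation, removing a recorded source signature from a far-field trace, can be sketched as a water-level-stabilized frequency-domain deconvolution (synthetic signals; the experiment's actual processing chain is not reproduced):

```python
import numpy as np

def deconvolve_source(trace, source, water_level=0.01):
    """Frequency-domain removal of a recorded source signature (e.g. a ground-force
    measurement) from a trace, stabilized by a water level on the source spectrum."""
    n = len(trace)
    T, S = np.fft.rfft(trace, n), np.fft.rfft(source, n)
    floor = water_level * np.max(np.abs(S))
    S_stab = np.where(np.abs(S) < floor, floor * np.exp(1j * np.angle(S)), S)
    return np.fft.irfft(T / S_stab, n)

# synthetic check: build a trace from a known reflectivity and source, then undo it
source = np.exp(-0.2 * np.arange(50))          # decaying signature (no spectral nulls)
reflectivity = np.zeros(500)
reflectivity[[100, 230, 300]] = [1.0, -0.6, 0.4]
trace = np.convolve(reflectivity, source)[:500]
recovered = deconvolve_source(trace, np.pad(source, (0, 450)))
```

The water level guards against spectral notches in the measured signature; with a notch-free signature, as here, the reflectivity is recovered essentially exactly.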

  9. Investigation of physico-chemical processes in lithium-ion batteries by deconvolution of electrochemical impedance spectra

    Science.gov (United States)

    Manikandan, Balasundaram; Ramar, Vishwanathan; Yap, Christopher; Balaya, Palani

    2017-09-01

The individual physico-chemical processes in lithium-ion batteries, namely solid-state diffusion and charge transfer polarization, are difficult to track by impedance spectroscopy due to simultaneous contributions from cathode and anode. A deeper understanding of various polarization processes in lithium-ion batteries is important to enhance storage performance and cycle life. In this context, the polarization processes occurring in cylindrical 18650 cells comprising different cathodes against graphite anodes (LiNi0.2Mn0.2Co0.6O2 vs. graphite; LiNi0.6Mn0.2Co0.2O2 vs. graphite; LiNi0.8Co0.15Al0.05O2 vs. graphite and LiFePO4 vs. graphite) are investigated by deconvolution of impedance spectra across various states of charge. Further, cathodes and anodes are extracted from the investigated 18650-type cells and tested in half-cells against Li-metal as well as in symmetric cell configurations to understand the contribution of cathode and anode to the full cells of the various battery chemistries studied. Except for the LiFePO4 vs. graphite cell, the polarization resistance of graphite is found to be higher than that of the investigated cathodes, proving that the polarization in lithium-ion batteries is largely influenced by the graphitic anode. Furthermore, the charge transfer polarization resistance encountered by the cathodes investigated in this work is found to be a strong function of the state of charge.

  10. Density based pruning for identification of differentially expressed genes from microarray data

    Directory of Open Access Journals (Sweden)

    Xu Jia

    2010-11-01

Motivation: Identification of differentially expressed genes from microarray datasets is one of the most important analyses for microarray data mining. Popular algorithms such as the statistical t-test rank genes based on a single statistic. The false positive rate of these methods can be improved by considering other features of differentially expressed genes. Results: We propose a pattern recognition strategy for identifying differentially expressed genes. Genes are mapped to a two-dimensional feature space composed of the average difference of gene expression and the average expression level. A density-based pruning algorithm (DB pruning) is developed to screen out potential differentially expressed genes usually located in the sparse boundary region. Biases of popular algorithms for identifying differentially expressed genes are visually characterized. Experiments on 17 datasets from the Gene Omnibus Database (GEO) with experimentally verified differentially expressed genes showed that DB pruning can significantly improve the prediction accuracy of popular identification algorithms such as t-test, rank product, and fold change. Conclusions: Density-based pruning of non-differentially expressed genes is an effective method for enhancing statistical testing based algorithms for identifying differentially expressed genes. It improves t-test, rank product, and fold change by 11% to 50% in the numbers of identified true differentially expressed genes. The source code of DB pruning is freely available on our website http://mleg.cse.sc.edu/degprune
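A minimal sketch of density-based pruning in the two-dimensional feature space (synthetic data and a k-NN density estimate of our own choosing, not the published DB pruning code):

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes = 1000
avg_expression = rng.normal(8.0, 2.0, n_genes)
avg_difference = rng.normal(0.0, 0.5, n_genes)
avg_difference[:10] += 4.0   # plant 10 genes in the sparse boundary region

features = np.column_stack([avg_difference, avg_expression])

def knn_density(features, k=20):
    """Local density as the inverse distance to the k-th nearest neighbour."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    kth = np.sort(d, axis=1)[:, k]   # column 0 of the sorted row is the point itself
    return 1.0 / (kth + 1e-12)

density = knn_density(features)
candidates = np.where(density < np.quantile(density, 0.05))[0]  # sparse-region genes
```

Genes with a large expression difference sit far from the dense bulk of the feature space, so a low-density cutoff isolates them as candidate differentially expressed genes.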

  11. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
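The simplest of these interval statistics, exact bounds on the sample mean and median, can be sketched directly (variance bounds are harder and, as the report discusses, their computability depends on the structure of the intervals):

```python
import statistics

def interval_mean(intervals):
    """Exact bounds on the sample mean when each measurement is an interval [lo, hi]."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def interval_median(intervals):
    """Bounds on the sample median: median of lower ends, median of upper ends."""
    return (statistics.median(lo for lo, _ in intervals),
            statistics.median(hi for _, hi in intervals))

data = [(9.8, 10.2), (9.9, 10.4), (10.0, 10.1), (9.7, 10.3)]
print(interval_mean(data))     # ≈ (9.85, 10.25)
print(interval_median(data))   # ≈ (9.85, 10.25)
```

Because the mean and median are monotone in each observation, their extremes are attained at the interval endpoints, which is why these two bounds are cheap while the variance bounds are not.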

  12. Serial Expression Analysis: a web tool for the analysis of serial gene expression data

    Science.gov (United States)

    Nueda, Maria José; Carbonell, José; Medina, Ignacio; Dopazo, Joaquín; Conesa, Ana

    2010-01-01

    Serial transcriptomics experiments investigate the dynamics of gene expression changes associated with a quantitative variable such as time or dosage. The statistical analysis of these data implies the study of global and gene-specific expression trends, the identification of significant serial changes, the comparison of expression profiles and the assessment of transcriptional changes in terms of cellular processes. We have created the SEA (Serial Expression Analysis) suite to provide a complete web-based resource for the analysis of serial transcriptomics data. SEA offers five different algorithms based on univariate, multivariate and functional profiling strategies framed within a user-friendly interface and a project-oriented architecture to facilitate the analysis of serial gene expression data sets from different perspectives. SEA is available at sea.bioinfo.cipf.es. PMID:20525784

  13. A statistical model for horizontal mass flux of erodible soil

    International Nuclear Information System (INIS)

    Babiker, A.G.A.G.; Eltayeb, I.A.; Hassan, M.H.A.

    1986-11-01

    It is shown that the mass flux of erodible soil transported horizontally by a statistically distributed wind flow itself has a statistical distribution. An explicit expression for the probability density function (p.d.f.) of the flux is derived for the case in which the wind speed has a Weibull distribution. The statistical distribution for a mass flux characterized by a generalized Bagnold formula is found to be Weibull for the case of zero threshold speed. Analytic and numerical values for the average horizontal mass flux of soil are obtained for various values of the wind parameters, by evaluating the first moment of the flux density function. (author)
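
    The first-moment evaluation described above can be mimicked with a simple Monte Carlo average: sample wind speeds from a Weibull distribution and average a Bagnold-type flux over the samples. The cubic flux law and every parameter value below are illustrative assumptions, not values from the paper:

```python
import math
import random

def mean_mass_flux(c=1.0e-4, u_t=0.0, shape=2.0, scale=6.0,
                   n=200_000, seed=42):
    """Monte Carlo estimate of the average horizontal mass flux E[q(U)]
    for a Bagnold-type flux q(u) = c * u**3 (for u >= u_t) and a
    Weibull-distributed wind speed U.  All defaults are illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.weibullvariate(scale, shape)   # scale (lambda), then shape (k)
        if u >= u_t:
            total += c * u ** 3
    return total / n
```

    For zero threshold speed the estimate can be checked against the closed form E[U³] = λ³·Γ(1 + 3/k) for a Weibull wind speed with shape k and scale λ.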

  14. Numerical evaluation of the statistical properties of a potential energy landscape

    International Nuclear Information System (INIS)

    Nave, E La; Sciortino, F; Tartaglia, P; Michele, C De; Mossa, S

    2003-01-01

    The techniques which allow the numerical evaluation of the statistical properties of the potential energy landscape for models of simple liquids are reviewed and critically discussed. Expressions for the liquid free energy and its vibrational and configurational components are reported. Finally, a possible model for the statistical properties of the landscape, which appears to describe correctly fragile liquids in the region where equilibrium simulations are feasible, is discussed

  15. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  16. Bayes and Networks

    NARCIS (Netherlands)

    Gao, F.

    2017-01-01

    The dissertation consists of research in three subjects in two themes—Bayes and networks: The first studies the posterior contraction rates for the Dirichlet-Laplace mixtures in a deconvolution setting (Chapter 1). The second subject regards the statistical inference in preferential attachment

  17. Expression of tumor necrosis factor-alpha and receptor I(P55in pterygium

    Directory of Open Access Journals (Sweden)

    Bing Wu

    2014-06-01

    AIM: To observe the expression of tumor necrosis factor-alpha (TNF-α) and its receptor I (P55) in different pterygia and to discuss the role of TNF-α and receptor I (P55) in pterygium. METHODS: Immunohistochemical staining (PV method) was adopted to detect the expression of TNF-α and receptor I in pterygium (72 eyes) and para-pterygium conjunctival tissue (30 eyes). The relationship between the expression and clinical-pathological parameters was also analyzed. RESULTS: The positive rates of TNF-α were 65.3% (47/72) in pterygium and 26.7% (8/30) in para-pterygium conjunctival tissue; the positive expression of TNF-α differed significantly between the two groups (χ²=12.706; χ²=13.875; χ²=6.547, P=0.011). There was no statistically significant difference in expression intensity between the two groups (F=1.288, P=0.393); the positive rate in the advanced pterygium group was higher than in the quiescent pterygium group (χ²=4.082, P=0.043), with no statistically significant difference in expression intensity (F=0.489, P=0.708). The positive rate of P55 in the recurrent pterygium group was higher than in the primary pterygium group (χ²=9.907, P=0.002), with no statistically significant difference in expression intensity (F=1.175, P=0.424); the positive rate in the advanced pterygium group was higher than in the quiescent pterygium group (χ²=11.140, P=0.001), with no statistically significant difference in expression intensity (F=0.665, P=0.621). CONCLUSION: The expression of TNF-α and P55 changes with the development of clinical staging and onset. The expression of TNF-α and P55 may be related to the clinical classification, staging and patients' working conditions of pterygium. There is no significant difference in the expression intensity of TNF-α and P55 across the clinical staging and onset of pterygium.

  18. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
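
    The averaging algorithm itself is a one-line recursion, S_k = (1 − α)·S_{k−1} + α·P_k, applied bin-wise to successive periodograms. The sketch below (naive DFT, illustrative smoothing constant) is an assumption of ours, not code from the paper:

```python
import cmath
import math
import random

def periodogram(x):
    """Periodogram of one data segment via a naive DFT: |X_k|^2 / N."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def exp_avg_psd(segments, alpha=0.1):
    """Exponential averaging of subsequent periodograms, bin by bin:
    S_k = (1 - alpha) * S_{k-1} + alpha * P_k  (alpha is illustrative)."""
    s = periodogram(segments[0])
    for seg in segments[1:]:
        s = [(1.0 - alpha) * si + alpha * pi
             for si, pi in zip(s, periodogram(seg))]
    return s
```

    For unit-variance white noise the averaged estimate fluctuates around a flat PSD of 1; the size of those fluctuations is exactly what the χ²-based PDF derived in the paper characterizes.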

  19. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  20. Telling the truth with statistics

    CERN Multimedia

    CERN. Geneva; CERN. Geneva. Audiovisual Unit

    2002-01-01

    This course of lectures will cover probability, distributions, fitting, errors and confidence levels, for practising High Energy Physicists who need to use statistical techniques to express their results. Concentrating on these appropriate specialist techniques means that they can be covered in appropriate depth, while assuming only the knowledge and experience of a typical Particle Physicist. The different definitions of probability will be explained, and it will become apparent why this basic subject is so controversial; there are several viewpoints and it is important to understand them all, rather than abusing the adherents of different beliefs. Distributions will be covered: the situations they arise in, their useful properties, and the amazing result of the Central Limit Theorem. Fitting a parametrisation to a set of data is one of the most widespread uses of statistics: there are lots of ways of doing this and these will be presented, with discussion of which is appropriate in different circumstances. This t...

  1. Student Understanding of Taylor Series Expansions in Statistical Mechanics

    Science.gov (United States)

    Smith, Trevor I.; Thompson, John R.; Mountcastle, Donald B.

    2013-01-01

    One goal of physics instruction is to have students learn to make physical meaning of specific mathematical expressions, concepts, and procedures in different physical settings. As part of research investigating student learning in statistical physics, we are developing curriculum materials that guide students through a derivation of the Boltzmann…

  2. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
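
    The alternating structure of the method can be illustrated with a 1-D toy version in which plain Richardson–Lucy (RL) multiplicative updates stand in for the SGP inner iterations; there is no Strehl-ratio constraint here, and all names, initializations and iteration counts are illustrative:

```python
def conv(x, k):
    """1-D circular convolution of signal x with kernel k (equal lengths)."""
    n = len(x)
    return [sum(x[(i - j) % n] * k[j] for j in range(n)) for i in range(n)]

def rl_step(est, other, data):
    """One Richardson-Lucy multiplicative update of `est`, holding `other`
    fixed: est <- est * correlate(data / model, other)."""
    n = len(est)
    model = conv(est, other)
    ratio = [d / max(m, 1e-12) for d, m in zip(data, model)]
    flipped = [other[-j % n] for j in range(n)]   # correlation = conv w/ flip
    return [e * c for e, c in zip(est, conv(ratio, flipped))]

def blind_rl(data, n_outer=20, n_inner=2):
    """Inexact alternating minimization: a fixed number of inner RL updates
    of the object, then of the PSF, per outer iteration (toy sketch)."""
    n = len(data)
    obj = list(data)                     # common heuristic: start from the data
    psf = [0.0] * n
    psf[0], psf[1], psf[-1] = 0.5, 0.25, 0.25      # peaked initial PSF guess
    for _ in range(n_outer):
        for _ in range(n_inner):
            obj = rl_step(obj, psf, data)
        for _ in range(n_inner):
            psf = rl_step(psf, obj, data)
        s = sum(psf)                     # enforce the normalization constraint
        psf = [p / s for p in psf]
    return obj, psf
```

    The outer loop alternates a fixed number of inner updates of the object and of the PSF, renormalizing the PSF after each outer iteration, which mirrors the inexact alternating minimization described above.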

  3. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  4. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  5. Statistical distribution of the local purity in a large quantum system

    International Nuclear Information System (INIS)

    De Pasquale, A; Pascazio, S; Facchi, P; Giovannetti, V; Parisi, G; Scardicchio, A

    2012-01-01

    The local purity of large many-body quantum systems can be studied by following a statistical mechanical approach based on a random matrix model. Restricting the analysis to the case of global pure states, this method proved to be successful, and a full characterization of the statistical properties of the local purity was obtained by computing the partition function of the problem. Here we generalize these techniques to the case of global mixed states. In this context, by uniformly sampling the phase space of states with assigned global mixedness, we determine the exact expression of the first two moments of the local purity and a general expression for the moments of higher order. This generalizes previous results obtained for globally pure configurations. Furthermore, through the introduction of a partition function for a suitable canonical ensemble, we compute the approximate expression of the first moment of the marginal purity in the high-temperature regime. In the process, we establish a formal connection with the theory of quantum twirling maps that provides an alternative, possibly fruitful, way of performing the calculation. (paper)

  6. Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution.

    Directory of Open Access Journals (Sweden)

    J M Portegies

    We propose two strategies to improve the quality of tractography results computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations, associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD). Thereby we improve the FOD as input for subsequent tractography. Secondly, we introduce the fiber to bundle coherence (FBC), a measure for quantification of fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing the spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data, acquired with different scanning protocols. On the phantom data we find that PDE enhancements improve both local metrics and global metrics of tractography results, compared to CSD without enhancements. On the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and they reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.

  7. Effective viscosity of dispersions approached by a statistical continuum method

    NARCIS (Netherlands)

    Mellema, J.; Willemse, M.W.M.

    1983-01-01

    The problem of the determination of the effective viscosity of disperse systems (emulsions, suspensions) is considered. On the basis of the formal solution of the equations governing creeping flow in a statistically homogeneous dispersion, the effective viscosity is expressed in a series expansion

  8. Thermal and Electrical Conductivities of a Three-Dimensional Ideal Anyon Gas with Fractional Exclusion Statistics

    International Nuclear Information System (INIS)

    Qin Fang; Wen Wen; Chen Ji-Sheng

    2014-01-01

    The thermal and electrical transport properties of an ideal anyon gas within fractional exclusion statistics are studied. By solving the Boltzmann equation with the relaxation-time approximation, analytical expressions for the thermal and electrical conductivities of a three-dimensional ideal anyon gas are given. The low-temperature expressions for the two conductivities are obtained by using the Sommerfeld expansion. It is found that the Wiedemann–Franz law should be modified by the higher-order temperature terms, which depend on the statistical parameter g for a charged anyon gas. Neglecting the higher-order terms in temperature, the Wiedemann–Franz law is respected, which gives the Lorenz number. The Lorenz number is a function of the statistical parameter g.
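
    For reference, when the higher-order temperature terms are dropped the Wiedemann–Franz law fixes the ratio κ/(σT) at the Sommerfeld value of the Lorenz number, L₀ = (π²/3)(k_B/e)², which any g-dependent generalization must reduce to in that limit. A quick numerical check using standard constants (not code from the paper):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K (exact in SI since 2019)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact in SI since 2019)

def lorenz_number():
    """Sommerfeld value of the Lorenz number: L0 = (pi^2 / 3) * (k_B / e)^2."""
    return (math.pi ** 2 / 3.0) * (K_B / E_CHARGE) ** 2
```

    This evaluates to about 2.44 × 10⁻⁸ W·Ω·K⁻².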

  9. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin


  10. Second-Order Statistics for Wave Propagation through Complex Optical Systems

    DEFF Research Database (Denmark)

    Yura, H.T.; Hanson, Steen Grüner

    1989-01-01

    Closed-form expressions are derived for various statistical functions that arise in optical propagation through arbitrary optical systems that can be characterized by a complex ABCD matrix in the presence of distributed random inhomogeneities along the optical path. Specifically, within the second-order Rytov approximation, explicit general expressions are presented for the mutual coherence function, the log-amplitude and phase correlation functions, and the mean-square irradiance that are obtained in propagation through an arbitrary paraxial ABCD optical system containing Gaussian-shaped limiting
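
    The ABCD description used above composes 2 × 2 ray matrices for the elements of the optical train; the statistical quantities are then written in terms of the resulting system matrix. A minimal real-valued sketch (thin lens plus free space, illustrative values; the paper's system matrices are complex in general):

```python
def mat_mul(m, n):
    """Product of two 2x2 ray matrices, m @ n."""
    return [[m[0][0] * n[0][0] + m[0][1] * n[1][0],
             m[0][0] * n[0][1] + m[0][1] * n[1][1]],
            [m[1][0] * n[0][0] + m[1][1] * n[1][0],
             m[1][0] * n[0][1] + m[1][1] * n[1][1]]]

def free_space(d):
    """ABCD matrix of free-space propagation over distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """ABCD matrix of a thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def system_abcd(s_obj, f, s_img):
    """Object distance s_obj, thin lens f, image distance s_img;
    matrices compose right-to-left, in propagation order."""
    return mat_mul(free_space(s_img), mat_mul(thin_lens(f), free_space(s_obj)))
```

    When 1/s_obj + 1/s_img = 1/f, the B element of the system matrix vanishes (the imaging condition) and A gives the magnification.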

  11. SRB states and nonequilibrium statistical mechanics close to equilibrium

    OpenAIRE

    Gallavotti, Giovanni; Ruelle, David

    1996-01-01

    Nonequilibrium statistical mechanics close to equilibrium is studied using SRB states and a formula for their derivatives with respect to parameters. We write general expressions for the thermodynamic fluxes (or currents) and the transport coefficients, generalizing previous results. In this framework we give a general proof of the Onsager reciprocity relations.

  12. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    This article aims to challenge official statistics providers and statistics educators to ponder on how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens' access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.

  13. DbAccess: Interactive Statistics and Graphics for Plasma Physics Databases

    International Nuclear Information System (INIS)

    Davis, W.; Mastrovito, D.

    2003-01-01

    DbAccess is an X-windows application, written in IDL®, meeting many specialized statistical and graphical needs of NSTX [National Spherical Torus Experiment] plasma physicists, such as regression statistics and the analysis of variance. Flexible "views" and "joins," which include options for complex SQL expressions, facilitate mixing data from different database tables. General Atomics Plot Objects add extensive graphical and interactive capabilities. An example is included for plasma confinement-time scaling analysis using a multiple linear regression least-squares power fit.
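
    A confinement-time scaling analysis of the kind mentioned fits a power law τ = C·x₁^a₁ ⋯ x_k^a_k by ordinary least squares in log space. The following self-contained sketch is illustrative of that computation, not DbAccess code:

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = C * x1**a1 * ... * xk**ak via the
    log-linear model ln y = ln C + a1 ln x1 + ... + ak ln xk."""
    rows = [[1.0] + [math.log(x) for x in xrow] for xrow in xs]
    b = [math.log(y) for y in ys]
    k = len(rows[0])
    # Normal equations: (A^T A) beta = A^T b
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    atb = [sum(r[i] * bi for r, bi in zip(rows, b)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (atb[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, k))) / ata[r][r]
    return math.exp(beta[0]), beta[1:]   # (C, list of exponents)
```

    With exact power-law data the normal equations recover C and the exponents to machine precision; with noisy data this is an ordinary least-squares fit in the logs.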

  14. Energy statistics and balances of non-OECD countries 1991-1992

    International Nuclear Information System (INIS)

    1994-01-01

    Contains a compilation of energy production and consumption statistics for 85 non-OECD countries and regions, including developing countries, Central and Eastern European countries and the former Soviet Union. Data are expressed in original units and in common units for coal, oil, gas, electricity and heat. Historical tables for both individual countries and regions summarize data on coal, gas and electricity production and consumption since 1971. Similar data for OECD are available in the IEA publications Energy Statistics and Energy Balances of OECD Countries

  15. Pattern statistics on Markov chains and sensitivity to parameter estimation

    Directory of Open Access Journals (Sweden)

    Nuel Grégory

    2006-10-01

    Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, etc.). Results: In the particular case where pattern statistics (overlap counting) are computed through binomial approximations, we use the delta-method to give an explicit expression for σ, the standard deviation of a pattern statistic. This result is validated using simulations, and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
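
    The delta-method step can be sketched generically: if a pattern statistic is a smooth function f of an estimated parameter θ with standard deviation sd(θ̂), then sd(f(θ̂)) ≈ |f′(θ)|·sd(θ̂). The expected-count example below is our own toy illustration, not the paper's model:

```python
def delta_method_sd(f, theta, sd_theta, eps=1e-6):
    """First-order delta method with a central-difference derivative:
    sd(f(theta_hat)) ~ |f'(theta)| * sd(theta_hat)."""
    deriv = (f(theta + eps) - f(theta - eps)) / (2.0 * eps)
    return abs(deriv) * sd_theta

def expected_count(p, n_pos=10_000, length=3):
    """Toy pattern statistic: expected count of a length-3 word under an
    i.i.d. model with letter probability p, over n_pos positions."""
    return n_pos * p ** length
```

    For p = 0.25 the sensitivity is n·L·p^(L−1) = 1875 per unit of p, so even a 0.01 error in the estimated letter probability shifts the expected count by about 19, illustrating the kind of sensitivity the paper quantifies.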

  16. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    Science.gov (United States)

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
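
    The classical Hack's-law component of such an analysis is a univariate regression of ln L on ln A. A closed-form sketch of that fit (illustrative, not the authors' code):

```python
import math

def hack_exponent(areas, lengths):
    """OLS fit of ln L = ln c + h * ln A, i.e. Hack's law L = c * A**h.
    Returns (c, h)."""
    lx = [math.log(a) for a in areas]
    ly = [math.log(l) for l in lengths]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    h = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    c = math.exp(my - h * mx)
    return c, h
```

    The paper's modified Hack's law extends this to a two-predictor regression of ln L on ln A and Strahler order.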

  17. Statistical equilibrium and symplectic geometry in general relativity

    International Nuclear Information System (INIS)

    Iglesias, P.

    1981-09-01

    A geometrical construction is given of the statistical equilibrium states of a system of particles in the gravitational field in general relativity. By a method of localization variables, the expression of thermodynamic values is given and the compatibility of this description is shown with a macroscopic model of a relativistic continuous medium for a given value of the free-energy function.

  18. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
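
    The building block of the κ-generalized distribution is the κ-exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which recovers the ordinary exponential as κ → 0 and a power law in the tail. A sketch (the survival-function parameterization below follows common usage and is an assumption of ours, not copied from the paper):

```python
import math

def exp_kappa(x, kappa):
    """kappa-exponential: (sqrt(1 + k^2 x^2) + k x)**(1/k); -> e**x as k -> 0."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa ** 2 * x ** 2) + kappa * x) ** (1.0 / kappa)

def kappa_survival(x, alpha, beta, kappa):
    """Illustrative survival function P(X > x) = exp_kappa(-(x/beta)**alpha);
    the parameter names (alpha: shape, beta: scale) are our assumptions."""
    return exp_kappa(-((x / beta) ** alpha), kappa)
```

    Because exp_κ interpolates between exponential and power-law decay, a single three-parameter form can cover both the low-middle income bulk and the Pareto tail.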

  19. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful

  20. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    Science.gov (United States)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and positive least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
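
    The PCA stage of such a chemometric pipeline can be sketched with a power iteration for the leading principal component of mean-centered composition spectra; this is a minimal stand-in for the PCA/HCA/PLS-DA toolchain, which in practice relies on numerical libraries:

```python
import math

def first_pc(data, iters=200):
    """Leading principal component via power iteration on the sample
    covariance matrix of the mean-centered rows of `data`.
    Assumes the leading eigenvalue is non-zero."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[r][i] * x[r][j] for r in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d                        # arbitrary non-zero start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v
```

    Each row of `data` plays the role of one composition spectrum; the returned unit vector is the direction of maximum variance, along which the different oxidation systems would be separated.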

  1. Statistical windows in angular momentum space: the basis of heavy-ion compound cross section

    International Nuclear Information System (INIS)

    Hussein, M.S.; Toledo, A.S. de.

    1981-04-01

    The concept of statistical windows in angular momentum space is introduced and utilized to develop a practical model for the heavy-ion compound cross section. Closed expressions for the average differential cross-section are derived and compared with Hauser-Feshbach calculations. The effects of the statistical windows are isolated and discussed. (Author)

  2. New Theoretical Expressions for the Five Adsorption Type Isotherms ...

    African Journals Online (AJOL)

    New Theoretical Expressions for the Five Adsorption Type Isotherms Classified by BET, Based on a Statistical Physics Treatment. ... that we have proposed, based on a statistical physics treatment, are rather powerful for better understanding and interpreting the five physical adsorption type isotherms at a microscopic level.

  3. Association of p53 protein expression with clinical outcome in advanced supraglottic cancer

    International Nuclear Information System (INIS)

    Kang, Jin Oh; Hong, Seong Eon

    1998-01-01

    To determine the incidence and prognostic effect of p53 expression in patients with advanced supraglottic cancer. Twenty-one of 48 advanced supraglottic cancer patients who received postoperative adjuvant radiation therapy were evaluated by immunohistochemical staining with a p53 monoclonal antibody. Three of six stage III patients and four of fifteen stage IV patients showed p53 expression, without a statistically significant difference (p=0.608). Five-year survival rates were 93% in p53-negative and 86% in p53-positive patients, with no significant difference (p=0.776). p53 expression showed no statistically significant correlation with primary tumor status (p=0.877), lymph node status (p=0.874) or age (p=0.64). There was no statistically significant correlation between traditionally known risk factors and p53 expression

  4. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number, up to 10,000, is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as the plant transient. • A method for reducing the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: Design tolerances, errors included in operation, and statistical errors in empirical correlations obviously affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on the upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using pseudorandom numbers and the Monte Carlo method. The dSFMT (double-precision SIMD-oriented Fast Mersenne Twister), a further development of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers generated by dSFMT are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
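    The Box–Muller step described above (uniform deviates mapped to normal deviates, which then perturb a nominal parameter for each trial deck) can be sketched as follows. Python's standard generator stands in for dSFMT here, and the nominal value and 2% relative error are invented for illustration.

```python
import math
import random

def box_muller(u1, u2):
    # Map two independent uniforms on (0, 1] x [0, 1) to two independent
    # standard normal deviates.
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

random.seed(1)
nominal, rel_sigma = 1.0, 0.02   # illustrative parameter and 2% relative error
samples = []
for _ in range(5000):
    # 1 - random() avoids u1 == 0, which would break the logarithm.
    z1, z2 = box_muller(1.0 - random.random(), random.random())
    samples.append(nominal * (1.0 + rel_sigma * z1))
    samples.append(nominal * (1.0 + rel_sigma * z2))

mean = sum(samples) / len(samples)
print(abs(mean - nominal) < 0.005)   # sample mean stays near the nominal value
```

    Each of the 10,000 perturbed values would seed one computation deck in a study like the one described.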

  5. Equivalent Gene Expression Profiles between Glatopa™ and Copaxone®.

    Directory of Open Access Journals (Sweden)

    Josephine S D'Alessandro

    Full Text Available Glatopa™ is a generic glatiramer acetate recently approved for the treatment of patients with relapsing forms of multiple sclerosis. Gene expression profiling was performed as a means to evaluate the equivalence of Glatopa and Copaxone®. Microarray analysis with 39,429 unique probes across the entire genome was performed in murine glatiramer acetate-responsive Th2-polarized T cells, a test system highly relevant to the biology of glatiramer acetate. A closely related but nonequivalent glatiramoid molecule was used as a control to establish assay sensitivity. Multiple probe-level (Student's t-test) and sample-level (principal component analysis, multidimensional scaling, and hierarchical clustering) statistical analyses were utilized to look for differences in gene expression induced by the test articles. The analyses were conducted across all genes measured, as well as across a subset of genes that were shown to be modulated by Copaxone. The following observations were made across multiple statistical analyses: the expression of numerous genes was significantly changed by treatment with Copaxone when compared against a media-only control; gene expression profiles induced by Copaxone and Glatopa were not significantly different; and gene expression profiles induced by Copaxone and the nonequivalent glatiramoid were significantly different, underscoring the sensitivity of the test system and the multiple analysis methods. Comparative analysis was also performed on sets of transcripts relevant to T-cell biology and antigen presentation, among others known to be modulated by glatiramer acetate. No statistically significant differences were observed between Copaxone and Glatopa in the expression levels (magnitude and direction) of these glatiramer acetate-regulated genes. In conclusion, multiple methods consistently supported equivalent gene expression profiles between Copaxone and Glatopa.

  6. Cytokeratin 19 Expression Patterns of Dentigerous Cysts and Odontogenic Keratocysts

    Science.gov (United States)

    Kamath, KP; Vidya, M

    2015-01-01

    Background: Although numerous investigators have studied the pattern of keratin expression in different odontogenic cysts, the results have been variable. Aim: The present study was conducted to determine the pattern of expression of cytokeratin 19 (CK 19) in the epithelial lining of odontogenic keratocysts and dentigerous cysts. Materials and Methods: The epithelial layers showing expression of the epithelial marker CK 19 were determined by immunohistochemical methods in 15 tissue specimens each of histopathologically confirmed cases of dentigerous cysts and odontogenic keratocysts. Statistical analysis was done to compare CK 19 expression between dentigerous cysts and odontogenic keratocysts using the Chi-square test. Among the odontogenic keratocysts, 40% (6/15) of the specimens were negative for CK 19, 40% (6/15) showed expression only in a single layer of the epithelium, and 20% (3/15) showed expression in more than one layer, but not the entire thickness of the epithelium. The observed differences in CK 19 expression between the two lesions were statistically significant (P < 0.01). Conclusion: The differences in CK 19 expression by these cysts may be utilized as a diagnostic tool in differentiating between these two lesions. PMID:25861531

  7. Single-cell mRNA transfection studies: delivery, kinetics and statistics by numbers.

    Science.gov (United States)

    Leonhardt, Carolin; Schwake, Gerlinde; Stögbauer, Tobias R; Rappl, Susanne; Kuhr, Jan-Timm; Ligon, Thomas S; Rädler, Joachim O

    2014-05-01

    In artificial gene delivery, messenger RNA (mRNA) is an attractive alternative to plasmid DNA (pDNA) since it does not require transfer into the cell nucleus. Here we show that, unlike for pDNA transfection, the delivery statistics and dynamics of mRNA-mediated expression are generic and predictable in terms of mathematical modeling. We measured single-cell expression time courses and levels of enhanced green fluorescent protein (eGFP) using time-lapse microscopy and flow cytometry (FC). The single-cell analysis provides direct access to the distributions of onset times, lifetimes and expression rates of mRNA and eGFP. We introduce a two-step stochastic delivery model that reproduces the number distribution of successfully delivered and translated mRNA molecules, and thereby the dose-response relation. Our results establish a statistical framework for mRNA transfection and as such should advance the development of RNA carriers and small interfering/micro RNA-based drugs. This team of authors established a statistical framework for mRNA transfection by using a two-step stochastic delivery model that reproduces the number distribution of successfully delivered and translated mRNA molecules and thereby their dose-response relation. This study establishes a nice connection between theory and experimental planning and will aid the cellular delivery of mRNA molecules. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
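    A minimal sketch of a two-step stochastic delivery model in the spirit of the abstract: each cell takes up a random number of mRNA complexes, and each taken-up complex independently yields translatable mRNA. All parameter values below are invented; this is not the authors' model calibration.

```python
import random

random.seed(7)

def translated_molecules(n_contacts=50, uptake_p=0.3, release_p=0.2):
    # Step 1: complexes taken up by the cell; Step 2: each taken-up complex
    # independently releases translatable mRNA (a thinned binomial).
    taken = sum(random.random() < uptake_p for _ in range(n_contacts))
    return sum(random.random() < release_p for _ in range(taken))

cells = [translated_molecules() for _ in range(4000)]
mean = sum(cells) / len(cells)

# Sequential independent thinnings compose: expected mean = 50 * 0.3 * 0.2 = 3.
print(abs(mean - 3.0) < 0.2)
```

    The per-cell counts produced this way give a number distribution of delivered molecules, which in the paper's framework maps onto the measured dose-response relation.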

  8. Statistical mechanical perturbation theory of solid-vapor interfacial free energy

    NARCIS (Netherlands)

    Kalikmanov, Vitalij Iosifovitsj; Hagmeijer, Rob; Venner, Cornelis H.

    2017-01-01

    The solid–vapor interfacial free energy γsv plays an important role in a number of physical phenomena, such as adsorption, wetting, and adhesion. We propose a closed form expression for the orientation averaged value of this quantity using a statistical mechanical perturbation approach developed in

  9. Statistical Mechanical Perturbation Theory of Solid−Vapor Interfacial Free Energy

    NARCIS (Netherlands)

    Kalikmanov, V.I.; Hagmeijer, R.; Venner, C.H.

    2017-01-01

    The solid–vapor interfacial free energy γsv plays an important role in a number of physical phenomena, such as adsorption, wetting, and adhesion. We propose a closed form expression for the orientation averaged value of this quantity using a statistical mechanical perturbation approach developed in

  10. Consistent dynamical and statistical description of fission and comparison

    Energy Technology Data Exchange (ETDEWEB)

    Shunuan, Wang [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    A survey of research on the consistent dynamical and statistical description of fission is briefly introduced. The channel theory of fission with diffusive dynamics, based on the Bohr channel theory of fission and the Fokker-Planck equation, and the Kramers-modified Bohr-Wheeler expression according to the Strutinsky method given by P. Frobrich et al., are compared and analyzed. (2 figs.).

  11. Evidence for radical anion formation during liquid secondary ion mass spectrometry analysis of oligonucleotides and synthetic oligomeric analogues: a deconvolution algorithm for molecular ion region clusters.

    Science.gov (United States)

    Laramée, J A; Arbogast, B; Deinzer, M L

    1989-10-01

    It is shown that one-electron reduction is a common process occurring in negative-ion liquid secondary ion mass spectrometry (LSIMS) of oligonucleotides and synthetic oligonucleosides, and that this process is in competition with proton loss. Deconvolution of the molecular anion cluster reveals contributions from (M-2H).-, (M-H)-, M.-, and (M+H)-. A model based on these ionic species gives excellent agreement with the experimental data. A correlation between the concentration of species arising via one-electron reduction [M.- and (M+H)-] and the electron affinity of the matrix has been demonstrated. The relative intensity of M.- is mass-dependent; this is rationalized on the basis of base stacking. Base sequence ion formation is theorized to arise from the M.- radical anion, among other possible pathways.

  12. Isotopic safeguards statistics

    International Nuclear Information System (INIS)

    Timmerman, C.L.; Stewart, K.B.

    1978-06-01

    The methods and results of our statistical analysis of isotopic data using isotopic safeguards techniques are illustrated using example data from the Yankee Rowe reactor. The statistical methods used in this analysis are paired comparison and regression analysis. A paired comparison results when a sample from a batch is analyzed by two different laboratories. Paired comparison techniques can be used with regression analysis to detect and identify outlier batches. The second analysis tool, linear regression, involves comparing various regression approaches. These approaches use two basic types of models: the intercept model, y = α + βx, and the initial point model, y − y₀ = β(x − x₀). The intercept model fits strictly the exposure or burnup values of isotopic functions, while the initial point model utilizes the exposure values plus the initial or fabricator's data values in the regression analysis. Two fitting methods are applied to each of these models: (1) the usual least-squares fitting approach, in which x is measured without error, and (2) Deming's approach, which uses the variance estimates obtained from the paired comparison results and considers both x and y to be measured with error. The Yankee Rowe data were first measured by Nuclear Fuel Services (NFS) and remeasured by Nuclear Audit and Testing Company (NATCO). The isotopic function illustrated, using actual numbers, is the ratio Pu/U versus ²³⁵D (where ²³⁵D is the amount of depleted ²³⁵U expressed in weight percent). Statistical results using the Yankee Rowe data indicate the attractiveness of Deming's regression model over the usual approach, by simple comparison of the given regression variances with the random variance from the paired comparison results
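    The two fitting methods contrasted above can be sketched side by side: ordinary least squares treats x as error-free, while Deming regression (with error-variance ratio delta) treats both x and y as measured with error. The data below are illustrative, not the Yankee Rowe measurements.

```python
# Illustrative paired measurements (not the Yankee Rowe data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
syy = sum((y - my) ** 2 for y in ys) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# Ordinary least squares: only y carries error.
b_ols = sxy / sxx

# Deming regression: delta is the ratio of y- to x-error variances
# (delta = 1 gives orthogonal regression).
delta = 1.0
b_dem = (syy - delta * sxx
         + ((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2) ** 0.5) / (2.0 * sxy)

# With error in x, OLS attenuates the slope toward zero; Deming does not.
print(b_ols > 0 and b_dem >= b_ols)
```

    In a safeguards setting, delta would come from the paired-comparison variance estimates rather than being fixed at 1.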

  13. Antagonism pattern detection between microRNA and target expression in Ewing's sarcoma.

    Directory of Open Access Journals (Sweden)

    Loredana Martignetti

    Full Text Available MicroRNAs (miRNAs) have emerged as fundamental regulators that silence gene expression at the post-transcriptional and translational levels. The identification of their targets is a major challenge in elucidating the regulated biological processes. The overall effect of a miRNA is reflected in target mRNA expression, suggesting the design of new investigative methods based on high-throughput experimental data such as miRNA and transcriptome profiles. We propose a novel statistical measure of non-linear dependence between miRNA and mRNA expression, in order to infer miRNA-target interactions. This approach, which we name antagonism pattern detection, is based on the statistical recognition of a triangular-shaped pattern in miRNA-target expression profiles. This pattern is observed in miRNA-target expression measurements because simultaneously elevated expression of both is statistically under-represented when the miRNA exerts a silencing effect. The proposed method enables miRNA target prediction to rely strongly on cellular context and physiological conditions reflected by expression data. The procedure has been assessed on synthetic datasets and tested on a set of real positive controls. It has then been applied to analyze expression data from Ewing's sarcoma patients. The antagonism relationship is evaluated as a good indicator of real miRNA-target biological interaction. The predicted targets are consistently enriched for miRNA binding site motifs in their 3'UTR. Moreover, we reveal sets of predicted targets for each miRNA sharing important biological functions. The procedure allows us to infer crucial miRNA regulators and their potential targets in Ewing's sarcoma disease. It can be considered a valid statistical approach to discovering new insights into miRNA regulatory mechanisms.
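    A toy version of the antagonism pattern: when a high miRNA level caps the achievable target level, jointly high miRNA-target pairs are under-represented relative to what independence would predict, producing the triangular joint profile. The simulation below is entirely synthetic, and the 0.5 threshold is arbitrary, chosen only to illustrate the idea.

```python
import random

random.seed(3)

# Synthetic miRNA/target expression pairs: when the miRNA is high, the
# target's achievable expression is capped, carving out the "high-high"
# corner of the joint distribution.
n = 2000
pairs = []
for _ in range(n):
    mirna = random.random()
    target = random.uniform(0.0, 1.0 - 0.8 * mirna)
    pairs.append((mirna, target))

high_high = sum(m > 0.5 and t > 0.5 for m, t in pairs) / n
p_m = sum(m > 0.5 for m, t in pairs) / n
p_t = sum(t > 0.5 for m, t in pairs) / n
expected_if_independent = p_m * p_t

# Joint high expression is rarer than independence would predict.
print(high_high < expected_if_independent)
```

    A real detector would formalize this deficit with a statistic and significance test rather than a raw frequency comparison.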

  14. Immunohistochemical expression of matrix metalloproteinase 13 in chronic periodontitis.

    Science.gov (United States)

    Nagasupriya, Alapati; Rao, Donimukkala Bheemalingeswara; Ravikanth, Manyam; Kumar, Nalabolu Govind; Ramachandran, Cinnamanoor Rajmani; Saraswathi, Thillai Rajashekaran

    2014-01-01

    The extracellular matrix is a complex integrated system responsible for the physiologic properties of connective tissue. Collagen is the major extracellular component that is altered in pathologic conditions, mainly periodontitis. The destruction involves proteolytic enzymes, primarily matrix metalloproteinases (MMPs), which play a key role in mediating and regulating the connective tissue destruction in periodontitis. The study group included 40 patients with clinically diagnosed chronic periodontitis. The control group included 20 patients with clinically normal gingiva covering impacted third molars undergoing extraction or areas where crown-lengthening procedures were performed. MMP-13 expression was demonstrated using immunohistochemistry in all the gingival biopsies, and the data were analyzed statistically. MMP-13 expression was greater in chronic periodontitis than in normal gingiva. MMP-13 was expressed by fibroblasts, lymphocytes, macrophages, plasma cells, and basal cells of the sulcular epithelium. Comparative evaluation of all the clinical and histologic parameters against MMP-13 expression showed high statistical significance with the Spearman correlation coefficient. Elevated levels of MMP-13 may play a role in the pathogenesis of chronic periodontitis. There is a direct correlation between increased expression of MMP-13 and various clinical and histologic parameters of disease severity.

  15. A general contact mechanical formulation of multilayered structures and its application to deconvolute thickness/mechanical properties of glue used in surface force apparatus.

    Science.gov (United States)

    Math, Souvik; Horn, Roger; Jayaram, Vikram; Biswas, Sanjay Kumar

    2007-04-15

    Currently, data obtained from surface force apparatus experiments are convoluted with the mechanical response of a glue of unknown thickness used to bond the mica sheets to the substrates. This paper describes a formulation to precisely deconvolute the forces between the mica sheets by determining the thickness of the glue, given the mechanical properties of the glue. The formulation consists of a general solution based on the noniterative Hankel transform of the Laplace equation. The generality is achieved by treating all the layers except the one in contact as an effective lumped system consisting of a set of springs in series, where each spring represents a layer. The solution is validated by nanoindentation of trilayer systems consisting of layers with widely diverse mechanical properties, some differing from each other by three orders of magnitude. SFA experiments are done with carefully metered slabs of glue. The proposed method is validated by comparing the actual glue thicknesses with those determined using the present analysis.

  16. Application of mathematical removal of positron range blurring in positron emission tomography

    International Nuclear Information System (INIS)

    Haber, S.F.; Derenzo, S.E.; Uber, D.

    1990-01-01

    The range of positrons in tissue is an important limitation to the ultimate spatial resolution achievable in positron emission tomography. In this work the authors have applied a Fourier deconvolution technique to remove range blurring in images taken by the Donner 600-crystal positron tomograph. Using phantom data, the authors have found significant improvement in the image quality and the FWHM for both ⁶⁸Ga and ⁸²Rb. These were successfully corrected so that the images and FWHM almost matched those of ¹⁸F, which has negligible positron range. However, statistical noise was increased by the deconvolution process and it was not practical to recover the full spatial resolution of the tomograph
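    A one-dimensional analogue of the Fourier deconvolution step may help: a point source blurred by an exponential "range" kernel is restored by Fourier division, with a small Wiener-style regularisation constant (an assumption here, not the authors' exact scheme) to limit the noise amplification the abstract warns about.

```python
import numpy as np

n = 64
image = np.zeros(n)
image[n // 2] = 1.0                     # point source
x = np.arange(n) - n // 2
kernel = np.exp(-np.abs(x) / 2.0)       # exponential "positron range" kernel
kernel /= kernel.sum()

K = np.fft.fft(np.fft.ifftshift(kernel))        # kernel transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(image) * K))

eps = 1e-3                              # regularisation constant (assumed)
restored = np.real(np.fft.ifft(
    np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))

# Deconvolution sharpens the blurred point back toward the original peak.
print(blurred.max() < restored.max() < 1.0)
```

    Raising eps suppresses noise amplification but leaves more residual blur, the trade-off the abstract describes.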

  17. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained expressions for the mean square displacement and the effective exponent as functions of time and percolation probability by statistical analysis, and compared them with simulations. We introduced a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.

  18. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters: basic concepts and the meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent-particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and nonideal lattice models, imperfect-gas theory of liquids, the theory of solutions, the statistical thermodynamics of interfaces, the statistical thermodynamics of high-molecule systems, and quantum statistics

  19. von Neumann entropy associated with the Haldane exclusion statistics

    International Nuclear Information System (INIS)

    Rajagopal, A.K.

    1995-01-01

    We obtain the von Neumann entropy per state of the Haldane exclusion statistics with parameter g in terms of the mean occupation number n̄ as S = −n̄{w ln w − (1 + w) ln(1 + w)}, where w = (1 − g n̄)/n̄. This reduces correctly to the well-known expressions in the limiting cases of Bose (g=0) and Fermi (g=1) statistics. We have derived the second- and third-order fluctuations in the occupation numbers for arbitrary g. An elegant general duality relationship between the w factor associated with the particle and that associated with the hole at the reciprocal g is deduced, along with the attendant relationship between the two respective entropies
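    The quoted limiting cases can be checked numerically. The sketch below assumes the entropy per state takes the form S = −n̄[w ln w − (1 + w) ln(1 + w)] with w = (1 − g·n̄)/n̄ (a reconstruction of the record's garbled formula, not verified against the original paper) and confirms that g = 0 and g = 1 recover the Bose and Fermi entropies.

```python
import math

def haldane_entropy(n, g):
    # Assumed/reconstructed form: S = -n * [w ln w - (1+w) ln(1+w)],
    # with w = (1 - g*n)/n; n is the mean occupation number per state.
    w = (1.0 - g * n) / n
    return -n * (w * math.log(w) - (1.0 + w) * math.log(1.0 + w))

def bose_entropy(n):
    # Standard Bose entropy per state: (1+n) ln(1+n) - n ln n.
    return (1.0 + n) * math.log(1.0 + n) - n * math.log(n)

def fermi_entropy(n):
    # Standard Fermi entropy per state: -[n ln n + (1-n) ln(1-n)].
    return -(n * math.log(n) + (1.0 - n) * math.log(1.0 - n))

for n in (0.1, 0.3, 0.7):
    assert abs(haldane_entropy(n, 0.0) - bose_entropy(n)) < 1e-12
    assert abs(haldane_entropy(n, 1.0) - fermi_entropy(n)) < 1e-12
print("Bose and Fermi limits recovered")
```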

  20. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods take a prominent place in psychologists' education. Known as difficult to understand and hard to learn, these contents are feared by students. Those who do not aspire to a research career at a university quickly forget the drilled material. Furthermore, because at first glance it does not apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many practising psychologists, statistical education makes sense only as a way of commanding respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics treats numbers, while psychotherapy treats subjects. So, is statistics an end in itself? In this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. To this end, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write, and to critically read, original articles as the backbone of research presumes a high degree of statistical education. To ignore statistics means to ignore research, and ultimately to surrender one's own professional work to arbitrariness.

  1. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data

    Directory of Open Access Journals (Sweden)

    Tintle Nathan L

    2012-08-01

    Full Text Available Abstract Background Statistical analyses of whole-genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. Results We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Conclusions Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve the statistical power and utility of expression data.

  2. Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points

    Science.gov (United States)

    Ekol, George

    2015-01-01

    This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…

  3. Statistical inference for remote sensing-based estimates of net deforestation

    Science.gov (United States)

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  4. Orthogonality catastrophe and fractional exclusion statistics

    Science.gov (United States)

    Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.

    2018-02-01

    We show that the N -particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N -body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in the Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  5. Novel asymptotic results on the high-order statistics of the channel capacity over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-06-01

    The exact analysis of the higher-order statistics of the channel capacity (i.e., higher-order ergodic capacity) often leads to complicated expressions involving advanced special functions. In this paper, we provide a generic framework for the computation of the higher-order statistics of the channel capacity over generalized fading channels. As such, this novel framework yields simple, closed-form expressions which are shown to be asymptotically tight bounds in the high signal-to-noise ratio (SNR) regime of a variety of fading environments. In addition, it reveals the existence of differences (i.e., constant capacity gaps in the log-domain) among different fading environments. By an asymptotically tight bound we mean that the high-SNR limit of the difference between the actual higher-order statistics of the channel capacity and its asymptotic bound (i.e., lower bound) tends to zero. The mathematical formalism is illustrated with selected numerical examples that validate the correctness of our newly derived results. © 2012 IEEE.
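    As a small consistency check in the spirit of such asymptotics, the sketch below Monte-Carlo-estimates the first-order statistic (ergodic capacity, in nats) over Rayleigh fading and compares it with the well-known high-SNR form ln(ρ) − γ_E. The SNR value and trial count are arbitrary choices, not taken from the paper.

```python
import math
import random

random.seed(5)

rho = 1e4                     # illustrative average SNR (not from the paper)
gamma_e = 0.5772156649015329  # Euler-Mascheroni constant
trials = 200000

acc = 0.0
for _ in range(trials):
    h = -math.log(1.0 - random.random())   # Exp(1) channel power (Rayleigh fading)
    acc += math.log(1.0 + rho * h)         # instantaneous capacity in nats
mc = acc / trials

asymptote = math.log(rho) - gamma_e        # known high-SNR ergodic-capacity form
print(abs(mc - asymptote) < 0.05)
```

    Higher-order statistics would replace the sample mean with higher sample moments of ln(1 + ρh), compared against the corresponding asymptotic expressions.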

  6. Bayesian Statistics: Concepts and Applications in Animal Breeding – A Review

    Directory of Open Access Journals (Sweden)

    Lsxmikant-Sambhaji Kokate

    2011-07-01

    Full Text Available Statistics uses two major approaches: the conventional (or frequentist) and the Bayesian approach. The Bayesian approach provides a complete paradigm for both statistical inference and decision making under uncertainty. Bayesian methods solve many of the difficulties faced by conventional statistical methods and extend the applicability of statistical methods. They exploit the use of probabilistic models to formulate scientific problems. Using Bayesian statistics raises two issues: computational difficulty, and the requirement to specify prior probability distributions. Markov Chain Monte Carlo (MCMC) methods were applied to overcome the computational difficulty, and interest in Bayesian methods was renewed. In Bayesian statistics, the Bayesian structural equation model (SEM) is used. It provides a powerful and flexible approach for studying quantitative traits across a wide spectrum of problems and thus presents no operational difficulties, with the exception of some complex cases. With this method, problems are solved with ease, and statisticians are comfortable with this particular way of expressing results and employing the available software to analyze a large variety of problems.

  7. Tropical geometry of statistical models.

    Science.gov (United States)

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.

  8. Effect of crack orientation statistics on effective stiffness of mircocracked solid

    DEFF Research Database (Denmark)

    Kushch, V.I.; Sevostianov, I.; Mishnaevsky, Leon

    2009-01-01

    provides a reduction of the boundary-value problem to an ordinary, well-posed set of linear algebraic equations. The exact finite-form expression of the effective stiffness tensor has been obtained by analytically averaging the strain and stress fields. A convergence study has been performed: the statistically

  9. A Deconvolution Protocol for ChIP-Seq Reveals Analogous Enhancer Structures on the Mouse and Human Ribosomal RNA Genes

    Directory of Open Access Journals (Sweden)

    Jean-Clement Mars

    2018-01-01

    Full Text Available The combination of Chromatin Immunoprecipitation and Massively Parallel Sequencing, or ChIP-Seq, has greatly advanced our genome-wide understanding of chromatin and enhancer structures. However, its resolution at any given genetic locus is limited by several factors. In applying ChIP-Seq to the study of the ribosomal RNA genes, we found that a major limitation to resolution was imposed by the underlying variability in sequence coverage that very often dominates the protein–DNA interaction profiles. Here, we describe a simple numerical deconvolution approach that, in large part, corrects for this variability, and significantly improves both the resolution and quantitation of protein–DNA interaction maps deduced from ChIP-Seq data. This approach has allowed us to determine the in vivo organization of the RNA polymerase I preinitiation complexes that form at the promoters and enhancers of the mouse (Mus musculus and human (Homo sapiens ribosomal RNA genes, and to reveal a phased binding of the HMG-box factor UBF across the rDNA. The data identify and map a “Spacer Promoter” and associated stalled polymerase in the intergenic spacer of the human ribosomal RNA genes, and reveal a very similar enhancer structure to that found in rodents and lower vertebrates.
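    A toy version of the coverage correction described above: divide the observed ChIP profile by the underlying coverage profile so that coverage variability no longer dominates the occupancy estimate. All profiles and constants below are synthetic, and this is a simplification of the authors' deconvolution protocol, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
positions = np.arange(200)
coverage = 1.0 + 0.8 * np.sin(positions / 15.0) ** 2   # uneven sequence coverage
binding = np.where((positions > 90) & (positions < 110), 4.0, 1.0)  # true occupancy
chip = rng.poisson(coverage * binding * 50) / 50.0     # observed ChIP-Seq signal

corrected = chip / coverage                            # coverage-corrected profile

inside = corrected[91:110].mean()
outside = np.concatenate([corrected[:91], corrected[110:]]).mean()
print(inside / outside > 3.0)                          # binding site now stands out
```

    Without the division by `coverage`, the sinusoidal coverage pattern would contribute spurious peaks comparable in size to the genuine binding signal.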

  10. Analysis of the stability of the traps in LiF:Mg,Cu,P by deconvolution of its TL curve

    International Nuclear Information System (INIS)

    Gonzalez, P.R.; Azorin, J.; Furetta, C.; Lopez, J.

    2004-01-01

    The results of a study of trap stability in TL dosemeters of LiF:Mg,Cu,P + PTFE, developed at ININ, are presented, taking as reference the commercial GR200A dosemeter manufactured in China. TL readings taken the same day as the irradiation showed four peaks whose energies, determined by deconvolution, were 1.30 ± 0.01 eV, 1.50 ± 0.01 eV, 1.70 ± 0.01 eV and 2.58 ± 0.02 eV for LiF:Mg,Cu,P + PTFE, while for GR200A the energies were 1.33 ± 0.11 eV, 1.58 ± 0.11 eV, 1.73 ± 0.11 eV and 2.60 ± 0.03 eV. The energies of peaks 3 and 4, which remained visible during the six months of the study, were 1.38 ± 0.01 eV and 2.65 ± 0.01 eV respectively for LiF:Mg,Cu,P + PTFE; in the same order, the energies for GR200A were 1.51 ± 0.02 eV and 2.64 ± 0.03 eV. (Author)
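
    Trap energies like those above are typically obtained by fitting first-order (Randall-Wilkins) glow peaks to the TL curve. A minimal sketch of one such peak, using the 1.38 eV trap depth reported above together with an assumed frequency factor s = 1e12 1/s and heating rate of 1 K/s (illustrative values, not the paper's fit), is:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_glow(T, E, s, n0=1.0, beta=1.0):
    """Randall-Wilkins first-order TL intensity on a temperature grid T (K)
    for trap depth E (eV), frequency factor s (1/s), heating rate beta (K/s)."""
    T = np.asarray(T, dtype=float)
    # Left-Riemann approximation of the integral of exp(-E/kT') from T[0] to T.
    integ = np.cumsum(np.exp(-E / (K_B * T))) * (T[1] - T[0])
    return n0 * s * np.exp(-E / (K_B * T)) * np.exp(-(s / beta) * integ)

T = np.arange(300.0, 600.0, 0.1)
curve = first_order_glow(T, E=1.38, s=1e12)
Tm = T[np.argmax(curve)]
# First-order kinetics predict beta*E/(k*Tm^2) = s*exp(-E/(k*Tm)) at the
# glow-peak maximum; the simulated peak satisfies this condition closely.
lhs = 1.0 * 1.38 / (K_B * Tm**2)
rhs = 1e12 * np.exp(-1.38 / (K_B * Tm))
```

    Glow-curve deconvolution exploits exactly this peak condition: fitting a sum of such peaks to the measured TL curve yields an E (and s) estimate per trap.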

  11. Real-time PCR gene expression profiling

    Czech Academy of Sciences Publication Activity Database

    Kubista, Mikael; Sjögreen, B.; Forootan, A.; Šindelka, Radek; Jonák, Jiří; Andrade, J.M.

    2007-01-01

    Roč. 1, - (2007), s. 56-60 ISSN 1360-8606 R&D Projects: GA AV ČR KJB500520601 Institutional research plan: CEZ:AV0Z50520514 Keywords: real-time PCR * expression profiling * statistical analysis Subject RIV: EB - Genetics; Molecular Biology

  12. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    Science.gov (United States)

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
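
    The first of the three steps, normalization by total intensity sums, amounts to rescaling each sample so that the summed fragment intensity is equal across samples. A simplified sketch of that idea (not mapDIA's actual implementation) is:

```python
import numpy as np

def normalize_total_intensity(X):
    """Scale each sample (column) so its total fragment intensity equals
    the mean total intensity across samples -- the simplest of the
    normalization options described for DIA workflows."""
    X = np.asarray(X, dtype=float)
    totals = X.sum(axis=0)          # per-sample total intensity
    target = totals.mean()          # common target total
    return X * (target / totals)    # column-wise rescaling

# Three fragments x two samples; sample 2 was acquired at twice the depth:
X = np.array([[100.0, 200.0],
              [ 50.0, 100.0],
              [ 25.0,  50.0]])
Xn = normalize_total_intensity(X)
# After normalization both samples have the same column sum, and the
# depth difference no longer masquerades as differential expression.
```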

  13. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    Science.gov (United States)

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Novel work is utilizing constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for with tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. The aim of this study was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region of interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in ipsilesional pathways (0.86-1.00) and for all metrics in callosal fibers (0.85-1.00), indicating that CSD-based tractography is a reliable approach to evaluate FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume. Copyright © 2015 Elsevier B.V. All rights reserved.
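
    The reliability statistics reported here are intra-class correlation coefficients; a common choice for inter-rater agreement is the two-way random-effects, absolute-agreement, single-measure form, ICC(2,1). A minimal implementation with made-up FA values (not the study's data) could look like:

```python
import numpy as np

def icc_absolute_agreement(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    intraclass correlation, commonly used for inter-rater reliability."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape                                   # n subjects, k raters
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Two raters scoring FA in ten tracts with near-perfect agreement:
rater1 = np.linspace(0.30, 0.60, 10)
rater2 = rater1 + 0.005                  # small constant rater offset
icc = icc_absolute_agreement(np.column_stack([rater1, rater2]))
```

    Values above roughly 0.75 are conventionally read as excellent agreement, which is the band the FA and ADC metrics above fall into.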

  14. Statistical properties of superimposed stationary spike trains.

    Science.gov (United States)

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
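
    A PPD spike train can be generated by inserting a dead-time after each exponentially distributed waiting interval, with the free hazard rate rescaled as λ_free = r / (1 − r·d) so the effective output rate r is preserved. The sketch below follows that standard construction; it is not necessarily the authors' (more efficient) algorithm:

```python
import numpy as np

def ppd_spike_train(rate, dead_time, t_max, rng):
    """One realization of a Poisson process with dead-time (PPD): after each
    spike no event can occur for `dead_time` seconds; the free rate is chosen
    so the effective output rate equals `rate` (requires rate*dead_time < 1)."""
    lam = rate / (1.0 - rate * dead_time)   # free (hazard) rate
    spikes, t = [], 0.0
    while True:
        t += dead_time + rng.exponential(1.0 / lam)
        if t >= t_max:
            return np.array(spikes)
        spikes.append(t)

rng = np.random.default_rng(0)
# Superimpose 20 independent PPD trains, as for background-input generation:
pooled = np.sort(np.concatenate(
    [ppd_spike_train(rate=10.0, dead_time=0.002, t_max=100.0, rng=rng)
     for _ in range(20)]))
# Each single train respects the dead time, but the pooled train does not,
# and (as the abstract notes) it is also not a Poisson process.
isi = np.diff(pooled)
```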

  15. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  16. Determination of selected endogenous anabolic androgenic steroids and ratios in urine by ultra high performance liquid chromatography tandem mass spectrometry and isotope pattern deconvolution.

    Science.gov (United States)

    Pitarch-Motellón, J; Sancho, J V; Ibáñez, M; Pozo, O; Roig-Navarro, A F

    2017-09-15

    An isotope dilution mass spectrometry (IDMS) method for the determination of selected endogenous anabolic androgenic steroids (EAAS) in urine by UHPLC-MS/MS has been developed using the isotope pattern deconvolution (IPD) mathematical tool. The method has been successfully validated for testosterone, epitestosterone, androsterone and etiocholanolone, employing their respective deuterated analogs using two certified reference materials (CRM). Accuracy was evaluated as recovery of the certified values and ranged from 75% to 108%. Precision was assessed in intraday (n=5) and interday (n=4) experiments, with RSDs below 5% and 10% respectively. The method was also found suitable for real urine samples, with limits of detection (LOD) and quantification (LOQ) below the normal urinary levels. The developed method meets the requirements established by the World Anti-Doping Agency for the selected steroids for Athlete Biological Passport (ABP) measurements, except in the case of androsterone, which is currently under study. Copyright © 2017. Published by Elsevier B.V.
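
    Isotope pattern deconvolution recovers the molar contributions of the analyte and its labeled internal standard from the measured isotope cluster by solving a small linear system over the known isotope patterns. The patterns and mixing ratio below are hypothetical, for illustration only:

```python
import numpy as np

def isotope_pattern_deconvolution(measured, patterns):
    """Recover the molar contributions of each isotopologue source (e.g. the
    natural-abundance analyte and its deuterated internal standard) from a
    measured isotope cluster by least squares on the pattern matrix."""
    A = np.asarray(patterns, dtype=float)   # columns: one pattern per source
    y = np.asarray(measured, dtype=float)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical 4-isotopologue abundance patterns for an analyte and a
# trideuterated analog (each column sums to 1):
natural = np.array([0.90, 0.08, 0.02, 0.00])
labeled = np.array([0.00, 0.02, 0.08, 0.90])
# Simulated cluster from a 2:1 mixture (analyte : internal standard):
measured = 2.0 * natural + 1.0 * labeled
x = isotope_pattern_deconvolution(measured, np.column_stack([natural, labeled]))
# x[0]/x[1] recovers the molar ratio, which IDMS converts to concentration.
```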

  17. New applications of statistical tools in plant pathology.

    Science.gov (United States)

    Garrett, K A; Madden, L V; Hughes, G; Pfender, W F

    2004-09-01

    The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to deal appropriately with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for the analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and of discrete data using generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication: the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.
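
    Model selection with Akaike's information criterion, mentioned above, can be illustrated with least-squares fits: AIC = n·ln(RSS/n) + 2k trades goodness of fit against the number of fitted parameters k. The dose-response data below are simulated for illustration, not taken from any study:

```python
import numpy as np

def aic(n, rss, k):
    """Akaike's information criterion for a Gaussian least-squares fit:
    AIC = n*ln(RSS/n) + 2k, where k is the number of fitted parameters."""
    return n * np.log(rss / n) + 2 * k

# Simulated disease-severity vs. dose data with genuine curvature:
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.05, x.size)

def poly_aic(deg):
    # Fit a polynomial of the given degree and score it with AIC.
    coef = np.polyfit(x, y, deg)
    rss = float(np.sum((y - np.polyval(coef, x)) ** 2))
    return aic(x.size, rss, deg + 1)

aic_linear, aic_quadratic = poly_aic(1), poly_aic(2)
# The curvature is real, so the quadratic model wins despite its extra
# parameter; for data without curvature the 2k penalty favors the line.
```

    The lower-AIC model is preferred; differences of a few units are usually treated as meaningful.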

  18. Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)

    International Nuclear Information System (INIS)

    2003-01-01

    This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas

  19. Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)

    International Nuclear Information System (INIS)

    2004-01-01

    This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas

  20. Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)

    International Nuclear Information System (INIS)

    2002-01-01

    This is the thirty-third in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas