WorldWideScience

Sample records for statistical expression deconvolution

  1. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from the reference profiles by gene expression changes associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against cell proportions measured by well-established flow cytometry, even after batch correction is applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples collected under several conditions (uncultured mononucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283
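
A minimal sketch of the baseline that reference-based expression deconvolution rests on, and that PERT extends with a shared multiplicative perturbation: recover mixing proportions by non-negative least squares against reference profiles. All profiles and proportions below are synthetic.

```python
# Baseline reference-based deconvolution via non-negative least squares (NNLS);
# this is the generic setting PERT builds on, not PERT's probabilistic model.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_celltypes = 500, 4
references = rng.lognormal(mean=2.0, sigma=1.0, size=(n_genes, n_celltypes))
true_props = np.array([0.5, 0.3, 0.15, 0.05])
mixture = references @ true_props + rng.normal(0.0, 0.5, n_genes)  # noisy bulk profile

coeffs, _ = nnls(references, mixture)   # non-negative mixing weights
proportions = coeffs / coeffs.sum()     # normalize to cell-type fractions
print(np.round(proportions, 3))         # close to true_props
```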

  2. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

Full Text Available Stain colour estimation is a prominent step in the analysis pipeline of most histology image processing algorithms, and a reliable and efficient stain colour deconvolution approach is fundamental to a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. The approach statistically analyses a multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered, uncorrelated data. We conducted an extensive set of experiments comparing the proposed method with recent state-of-the-art methods, and demonstrate the robustness of the approach using three different datasets of scanned slides, prepared in different labs using different scanners.
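
A minimal sketch of classical stain-colour deconvolution in optical-density space (Ruifrok-Johnston style), the operation whose mixing matrix the paper estimates statistically. The H&E stain vectors below are commonly quoted illustrative values, not the paper's estimates.

```python
# Classical colour deconvolution: Beer-Lambert optical density, then unmixing
# with the pseudo-inverse of an assumed stain matrix.
import numpy as np

def colour_deconvolve(rgb, stain_matrix):
    """rgb: (H, W, 3) uint8 image; stain_matrix: (n_stains, 3) stain OD vectors."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)   # optical density per channel
    unmix = np.linalg.pinv(stain_matrix)                # (3, n_stains)
    return od.reshape(-1, 3) @ unmix                    # per-pixel stain concentrations

hematoxylin = np.array([0.65, 0.70, 0.29])
eosin = np.array([0.07, 0.99, 0.11])
M = np.stack([hematoxylin, eosin])                      # rows are stain OD directions
img = np.full((4, 4, 3), 180, dtype=np.uint8)           # toy uniform patch
concentrations = colour_deconvolve(img, M)              # shape (16, 2)
```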

  3. Gene Expression Deconvolution for Uncovering Molecular Signatures in Response to Therapy in Juvenile Idiopathic Arthritis.

    Directory of Open Access Journals (Sweden)

    Ang Cui

Full Text Available Gene expression-based signatures help identify pathways relevant to diseases and treatments, but are challenging to construct when there is a diversity of disease mechanisms and treatments in patients with complex diseases. To overcome this challenge, we present a new application of an in silico gene expression deconvolution method, ISOpure-S1, and apply it to identify a common gene expression signature corresponding to response to treatment in 33 juvenile idiopathic arthritis (JIA) patients. Using pre- and post-treatment gene expression profiles only, we found a gene expression signature that significantly correlated with a reduction in the number of joints with active arthritis, a measure of clinical outcome (Spearman rho = 0.44, p = 0.040, Bonferroni correction). This signature may be associated with a decrease in T-cells, monocytes, neutrophils and platelets. The products of most differentially expressed genes include known biomarkers for JIA such as major histocompatibility complexes and interleukins, as well as novel biomarkers including α-defensins. This method is readily applicable to expression datasets of other complex diseases to uncover shared mechanistic patterns in heterogeneous samples.

  4. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

Full Text Available In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells are an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and the use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.
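
A generic sketch of the regularized-deconvolution idea, assuming a known blurring kernel; the paper's model of synchrony loss and cell-division timing is more detailed than this toy Tikhonov version, and its regularization level is chosen by cross-validation rather than fixed by hand.

```python
# Tikhonov-regularized deconvolution of a blurred 1-D profile (toy example).
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(kernel, n):
    """Lower-triangular Toeplitz matrix implementing causal convolution."""
    first_col = np.r_[kernel, np.zeros(n - len(kernel))]
    return toeplitz(first_col, np.r_[kernel[0], np.zeros(n - 1)])

def tikhonov_deconvolve(y, kernel, lam):
    """Recover x from y = kernel * x with a ridge penalty weighted by lam."""
    A = conv_matrix(kernel, len(y))
    return np.linalg.solve(A.T @ A + lam * np.eye(len(y)), A.T @ y)

kernel = np.array([0.25, 0.5, 0.25])              # assumed synchrony-loss kernel
x_true = np.r_[np.zeros(20), np.ones(20)]         # sharp "single-cell" profile
y = conv_matrix(kernel, len(x_true)) @ x_true     # blurred population-level signal
x_hat = tikhonov_deconvolve(y, kernel, lam=1e-3)  # lam chosen by CV in practice
```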

  5. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on the sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected the variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per-group sample sizes, and variability of the proportions
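
A minimal sketch of the per-gene linear-regression step described above, written here in Python rather than the LRCDE R package: regress bulk expression on cell proportions within each group, then compare the per-cell-type coefficients with a Welch-style t-statistic. Proportions and effect sizes below are synthetic.

```python
# Per-gene cell type-specific differential expression via linear regression.
import numpy as np

def celltype_coefficients(props, expr):
    """Least-squares cell type-specific expression and standard errors for one gene."""
    coef, *_ = np.linalg.lstsq(props, expr, rcond=None)
    dof = props.shape[0] - props.shape[1]
    mse = np.sum((expr - props @ coef) ** 2) / dof
    se = np.sqrt(np.diag(mse * np.linalg.inv(props.T @ props)))
    return coef, se

rng = np.random.default_rng(1)
props_ctrl = rng.dirichlet([5, 3, 2], size=20)   # per-sample cell proportions
props_case = rng.dirichlet([5, 3, 2], size=20)
expr_ctrl = props_ctrl @ np.array([10.0, 5.0, 2.0]) + rng.normal(0, 0.4, 20)
expr_case = props_case @ np.array([10.0, 8.0, 2.0]) + rng.normal(0, 0.4, 20)  # type 2 up

b0, se0 = celltype_coefficients(props_ctrl, expr_ctrl)
b1, se1 = celltype_coefficients(props_case, expr_case)
print(np.round((b1 - b0) / np.sqrt(se0**2 + se1**2), 2))  # per-cell-type t-statistics
```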

  6. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising area of endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least squares method. Computer-simulated D-DBAR spectra are generated and deconvoluted in trials to find the best form of regularizer and regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution, after carrying out a background subtraction and using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)

  7. Utilization of the statistics techniques for the analysis of the XPS (X-ray photoelectron spectroscopy) and Auger electronic spectra's deconvolutions

    International Nuclear Information System (INIS)

    Puentes, M.B.

    1987-01-01

For the analysis of XPS (X-ray photoelectron spectroscopy) and Auger spectra, it is important to separate the peaks and estimate their intensities. For this purpose, a methodology was implemented, including: a) filtering of the spectrum; b) subtraction of the base line (or inelastic background); c) deconvolution (separation of the distributions that make up the spectrum); and d) calculation of the error of the mean estimates, comprising adjustment quality tests. Software (FORTRAN IV Plus) that applies the proposed methodology to experimental spectra was implemented. The quality of the methodology was tested with simulated spectra. (Author) [es

  8. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for deconvolution density testing with known noise density, and efficient score tests for the case of unknown density. The

  9. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  10. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin the adaptation and application of this method to the deconvolution of positron lifetime annihilation spectra

  11. Some statistical properties of gene expression clustering for array data

    DEFF Research Database (Denmark)

    Abreu, G C G; Pinheiro, A; Drummond, R D

    2010-01-01

    DNA array data without a corresponding statistical error measure. We propose an easy-to-implement and simple-to-use technique that uses bootstrap re-sampling to evaluate the statistical error of the nodes provided by SOM-based clustering. Comparisons between SOM and parametric clustering are presented...... for simulated as well as for two real data sets. We also implement a bootstrap-based pre-processing procedure for SOM, that improves the false discovery ratio of differentially expressed genes. Code in Matlab is freely available, as well as some supplementary material, at the following address: https...

  12. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse solutions. This is largely an exercise in understanding how our neural network code works. 1 ref.
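
A sketch of the two non-neural baselines the report compares against: deconvolution via the pseudo-inverse of the convolution matrix, and an LMS adaptive inverse filter. Kernel, filter length and step size are toy choices.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
h = np.array([1.0, 0.6, 0.2])          # minimum-phase blur kernel (assumed)
x = rng.normal(size=200)               # unknown input sequence
y = np.convolve(x, h)[: len(x)]        # observed, causally blurred output

# (1) pseudo-inverse of the convolution matrix
H = toeplitz(np.r_[h, np.zeros(len(x) - len(h))], np.r_[h[0], np.zeros(len(x) - 1)])
x_pinv = np.linalg.pinv(H) @ y

# (2) LMS: adapt an FIR inverse filter w so that filtering y tracks x
taps, mu = 8, 0.01
w = np.zeros(taps)
for n in range(taps - 1, len(y)):
    window = y[n - taps + 1: n + 1][::-1]    # most recent samples first
    err = x[n] - w @ window
    w += mu * err * window                   # stochastic gradient update
print(np.linalg.norm(x - x_pinv), abs(err))  # pinv error ~0; LMS error shrinks
```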

  13. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  14. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  15. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  16. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented into the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free field motion in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities

  17. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals, in terms of resolution, is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase, which classical deconvolution algorithms are unable to deal with. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type algorithms, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up, for which simulated and real data were produced, was also analysed. This set-up demonstrated the benefit of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs
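
A minimal sketch of the Wiener-type deconvolution mentioned above, applied to a toy reflectivity sequence blurred by a synthetic pulse; the constant SNR below is assumed rather than estimated, and real ultrasonic pulses are non-minimum phase and shift-variant, which is exactly the difficulty the report discusses.

```python
import numpy as np

def wiener_deconvolve(y, h, snr):
    """Deconvolve y = h * x + noise in the frequency domain; snr is an assumed
    signal-to-noise power ratio (a constant here, a full spectrum in general)."""
    H = np.fft.rfft(h, len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener inverse filter
    return np.fft.irfft(G * np.fft.rfft(y), len(y))

pulse = np.exp(-0.5 * ((np.arange(32) - 8) / 2.0) ** 2)  # toy transducer pulse
x = np.zeros(256)
x[[60, 80, 150]] = [1.0, -0.7, 0.5]                      # sparse reflectivity
rng = np.random.default_rng(3)
y = np.convolve(x, pulse)[: len(x)] + rng.normal(0, 0.01, len(x))
x_hat = wiener_deconvolve(y, pulse, snr=1e3)             # sharper than y
```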

  18. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.

  19. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  20. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Roč. 26, č. 5 (2017), s. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  1. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  2. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  3. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals

  4. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  5. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

Full Text Available Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolved output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable over the whole range of SNR down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).

  6. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a great deal of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  7. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for estimating receiver functions in the time domain.
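
A sketch of the Toeplitz step described above: solve the autocorrelation normal equations for a forward prediction filter with scipy's Levinson-style Toeplitz solver. The signal and filter order are toy choices, not a receiver-function workflow.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(4)
x = np.convolve(rng.normal(size=2048), [1.0, 0.8, 0.3])[:2048]  # coloured toy signal

p = 10                                                     # prediction filter order
r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation r[0..]
a = solve_toeplitz(r[:p], r[1: p + 1])                     # Yule-Walker: R a = r_1..p

pred_err = np.array([x[n] - a @ x[n - p: n][::-1] for n in range(p, len(x))])
print(x.var(), pred_err.var())                             # error variance is smaller
```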

  8. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  9. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
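
A minimal sketch of the plain Richardson-Lucy iteration named above, without the total-variation term the authors found best, for a 1-D toy signal and a known PSF.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Multiplicative RL updates; observed must be positive (Poisson-like data)."""
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

psf = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
psf /= psf.sum()
truth = np.zeros(200)
truth[[70, 100, 130]] = [5.0, 8.0, 3.0]                 # point-like targets
observed = np.convolve(truth, psf, mode="same") + 0.01  # small offset keeps it positive
restored = richardson_lucy(observed, psf)
```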

  10. A new expression of the probability distribution in Incomplete Statistics and fundamental thermodynamic relations

    International Nuclear Information System (INIS)

Huang Zhifu; Lin Bihong; Chen Jincan

    2009-01-01

In order to overcome the limitations of the original expression of the probability distribution appearing in the literature on Incomplete Statistics, a new expression of the probability distribution is derived, where the Lagrange multiplier β introduced here is proved to be identical to that introduced in the second and third choices of the internal energy constraint in Tsallis' statistics, and to be exactly the physical inverse temperature. It is shown that the probability distribution described by the new expression is invariant under uniform translation of the energy spectrum. Moreover, several fundamental thermodynamic relations are given, and the relationship between the new and the original expressions of the probability distribution is discussed.

  11. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
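
A worked sketch of the basic relation that charge deconvolution inverts: consecutive charge states of one species map back to a single neutral mass. The peak list is a made-up series for a hypothetical ~20 kDa protein; real spectra additionally require peak picking and adduct and isotope handling.

```python
PROTON = 1.007276  # proton mass in Da

def neutral_mass(mz, z):
    """Neutral mass implied by one m/z peak at charge state z (positive mode)."""
    return z * (mz - PROTON)

peaks = [(2001.47, 10), (1819.61, 11), (1668.06, 12)]   # hypothetical charge series
masses = [neutral_mass(mz, z) for mz, z in peaks]
print(masses)  # all three agree near 20004.6 Da, so the series is self-consistent
```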

  12. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically, and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal of this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses and permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component, with a weighting scheme based on their deviation from this shape, and then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques for removing the source characteristic from the trace. In particular, total variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.

  13. Further statistical analysis for genome-wide expression evolution in primate brain/liver/fibroblast tissue

    Directory of Open Access Journals (Sweden)

    Gu Jianying

    2004-05-01

Full Text Available Abstract In spite of only a 1-2 per cent genomic DNA sequence difference, humans and chimpanzees differ considerably in behaviour and cognition. Affymetrix microarray technology provides a novel approach to addressing a long-term debate on whether the difference between humans and chimpanzees results from the alteration of gene expression. Here, we used several statistical methods (distance method, two-sample t-tests, regularised t-tests, ANOVA and bootstrapping) to detect the differential expression pattern between humans and great apes. Our analysis shows that the pattern we observed before is robust against various statistical methods; that is, the pronounced expression changes occurred on the human lineage after the split from chimpanzees, and the dramatic brain expression alterations in humans may be mainly driven by a set of genes with increased expression (up-regulated) rather than decreased expression (down-regulated).

  14. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  15. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  16. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and on the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  17. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  18. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software which utilizes a standard peak search algorithm. (author)

  19. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF-fitting photometry are commonly used. Iterative methods are important especially in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of point-spread functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.

  20. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  1. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed-rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
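
For comparison, the two standard paired baselines mentioned above, sketched with scipy on hypothetical paired expression values for one gene.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
before = rng.lognormal(mean=3.0, sigma=0.3, size=15)   # hypothetical expression
after = before * 1.3 + rng.normal(0.0, 1.0, size=15)   # modest up-regulation

t_stat, t_p = stats.ttest_rel(after, before)    # assumes normal paired differences
w_stat, w_p = stats.wilcoxon(after, before)     # rank-based, robust to extreme values
print(t_p, w_p)
```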

  2. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  3. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  4. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  5. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier transform method and iterative numerical methods. The positivity of the measured magnitude is the object of a new method by Yvon Biraud. The analytic prolongation of the Fourier transform applied to an unknown function has been studied by Jean-Paul Sheidecker. An extensive bibliography is given [fr

  6. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

For the year 2002, part of the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  7. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees

  8. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  9. Comparison of small n statistical tests of differential expression applied to microarrays

    Directory of Open Access Journals (Sweden)

    Lee Anna Y

    2009-02-01

Full Text Available Abstract Background DNA microarrays provide data for genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small-n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high-variance cDNA data. Conclusion Pre-processing of data influences performance, and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small-n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
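
A minimal sketch of the idea behind the regularized t-statistics that performed best above: add a prior variance to the pooled per-gene variance so that genes with accidentally tiny variance do not yield inflated t-values. The constant s0_sq below is assumed; CyberT and limma estimate their moderation from the data.

```python
import numpy as np

def regularized_t(x, y, s0_sq=0.05):
    """Two-sample t-statistic with a prior variance s0_sq added to the pooled variance."""
    nx, ny = len(x), len(y)
    sp_sq = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt((sp_sq + s0_sq) * (1.0 / nx + 1.0 / ny))

rng = np.random.default_rng(7)
g1 = rng.normal(5.0, 0.05, size=4)   # low-variance gene, tiny sample
g2 = rng.normal(5.2, 0.05, size=4)
print(regularized_t(g1, g2))         # damped relative to the ordinary t-statistic
```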

  10. GeneTrailExpress: a web-based pipeline for the statistical evaluation of microarray experiments

    Directory of Open Access Journals (Sweden)

    Kohlbacher Oliver

    2008-12-01

Full Text Available Abstract Background High-throughput methods that allow for measuring the expression of thousands of genes or proteins simultaneously have opened new avenues for studying biochemical processes. While the noisiness of the data necessitates extensive pre-processing of the raw data, the high dimensionality requires effective statistical analysis methods that facilitate the identification of crucial biological features and relations. For these reasons, the evaluation and interpretation of expression data is a complex, labor-intensive multi-step process. While a variety of tools for normalizing, analysing, or visualizing expression profiles have been developed in recent years, most of these tools offer only functionality for accomplishing certain steps of the evaluation pipeline. Results Here, we present a web-based toolbox that provides rich functionality for all steps of the evaluation pipeline. Our tool GeneTrailExpress offers, besides standard normalization procedures, powerful statistical analysis methods for studying a large variety of biological categories and pathways. Furthermore, an integrated graph visualization tool, BiNA, enables the user to draw the relevant biological pathways applying cutting-edge graph-layout algorithms. Conclusion Our gene expression toolbox, with its interactive visualization of the pathways and the expression values projected onto the nodes, will simplify the analysis and interpretation of biochemical pathways considerably.

  11. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, using a software approach, we reconstruct the filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter; deconvolution lets us partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct the input signals required for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches to determining the system-dependent frequency response are discussed. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
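
    The first application boils down to dividing out a measured frequency response in Fourier space. The sketch below assumes the impulse response has been measured; the floor eps is an added assumption that prevents near-zero bins from amplifying noise, standing in for the paper's own conditioning of the response.

        import numpy as np

        def deconvolve_fourier(measured, impulse_response, eps=1e-3):
            # Deconvolution by spectral division: X = Y / H in Fourier space.
            n = len(measured)
            Y = np.fft.fft(measured)
            H = np.fft.fft(impulse_response, n)
            H = np.where(np.abs(H) < eps, eps, H)  # guard near-zero bins
            return np.real(np.fft.ifft(Y / H))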

  12. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and super-resolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and utility of the method.

  13. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image, with only partial or no information about the degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and obtain meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur kernel is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by this oscillation structure, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  14. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations, in the classical framework of polynomial factorization. This univariate case suffers from several non-trivial ambiguities, and blind deconvolution is therefore known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP), demonstrating that, theoretically, recovery is computationally tractable. However, practical applications require efficient algorithms which operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semi-definite program in the noisy case. Our work is motivated by applications in blind communication scenarios, and we discuss a specific signaling scheme where information is encoded into polynomial roots.

  16. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
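
    Key aspect (2) relies on Wiener deconvolution. A generic one-dimensional sketch is given below; the constant signal-to-noise ratio snr is an illustrative assumption, not the noise model of the paper.

        import numpy as np

        def wiener_deconvolve(y, h, snr=100.0):
            # Estimate x from y = h * x + noise with the Wiener filter:
            # X = conj(H) * Y / (|H|^2 + 1/snr), evaluated in Fourier space.
            n = len(y)
            H = np.fft.fft(h, n)
            G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifft(G * np.fft.fft(y)))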

  17. Statistical modelling and deconvolution of yield meter data

    DEFF Research Database (Denmark)

    Tøgersen, Frede Aakmann; Waagepetersen, Rasmus Plenge

    2004-01-01

    ... and an impulse response function. This results in an unusual spatial covariance structure (depending on the driving pattern of the combine harvester) for the yield monitoring system data. Parameters of the impulse response function and the spatial covariance function of the yield are estimated using maximum likelihood.

  18. Statistical modelling and deconvolution of yield meter data

    DEFF Research Database (Denmark)

    Tøgersen, Frede Aakmann; Waagepetersen, Rasmus Plenge

    Data for yield maps can be obtained from modern combine harvesters equipped with a differential global positioning system and a yield monitoring system. Due to delay and smoothing effects in the combine harvester, the recorded yield data for a location represent a shifted weighted average of yield ...

  19. Statistical analysis of grapevine mortality associated with esca or Eutypa dieback foliar expression

    Directory of Open Access Journals (Sweden)

    Lucia GUERIN-DUBRANA

    2013-09-01

    Full Text Available Esca and Eutypa dieback are two major wood diseases of grapevine in France. Their widespread distribution in vineyards leads to vine decline and to a loss in productivity. However, little is known about the temporal dynamics of these diseases at the plant level, or about the relationship between foliar expression of the diseases and vine death. To investigate the latter question, the vines of six vineyards of cv. Cabernet Sauvignon in the Bordeaux region were surveyed by recording foliar symptoms, dead arms and dead plants from 2004 to 2010. In 2008, 2009 and 2010, approximately five percent of the asymptomatic vines died, but the percentage of dead vines which had previously expressed esca foliar symptoms was higher, and varied between vineyards. A logistic regression model was used to determine which previous years of symptomatic expression were associated with vine mortality. Esca-related mortality is always associated with foliar symptom expression in the year preceding vine death; one or two other, earlier years of expression frequently represented additional risk factors. The Eutypa dieback symptom was also a risk factor for death, superior or equal to that of esca. The internal necroses of vines expressing esca or Eutypa dieback are discussed in the light of these statistical results.
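
    The logistic-regression step can be sketched as follows. All data below are simulated and the design is hypothetical; it merely mirrors the idea of regressing vine death on indicators of foliar symptom expression in preceding years.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        # Hypothetical binary indicators: esca foliar symptoms in years
        # t-1, t-2 and t-3 before the observation year, one row per vine.
        X = rng.integers(0, 2, size=(200, 3)).astype(float)
        # Simulated mortality, with the year preceding death weighing most
        logit = -2.0 + 1.5 * X[:, 0] + 0.7 * X[:, 1]
        dead = rng.random(200) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression().fit(X, dead)
        print(model.coef_)  # per-year log-odds contributions to death risk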

  20. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Proper use of these methods demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used ...

  1. Deconvolution algorithms applied in ultrasonics

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase; classical deconvolution algorithms are unable to deal with this. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type filters, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up was also analysed, for which simulated and real data were produced. This set-up demonstrated the value of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs.

  2. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé , François-Xavier; Fadili , Jalal M.; Starck , Jean-Luc

    2012-01-01

    In this paper, we propose a Bayesian MAP estimator for solving deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis- and synthesis-type sparsity priors ...

  3. Analysis of MultiWord Expression Translation Errors in Statistical Machine Translation

    DEFF Research Database (Denmark)

    Klyueva, Natalia; Liyanapathirana, Jeevanthi

    2015-01-01

    In this paper, we analyse the usage of multiword expressions (MWE) in Statistical Machine Translation (SMT). We exploit the Moses SMT toolkit to train models for the French-English and Czech-Russian language pairs. For each language pair, two models were built: a baseline model without additional MWE data, and a model enhanced with information on MWE. For the French-English pair, we tried three methods of introducing the MWE data. For the Czech-Russian pair, we used just one method: adding automatically extracted data as a parallel corpus.

  4. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, both in recovering the blurring operator and in recovering the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and choosing the regularization parameters well, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  5. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration; rather, the basic issue of deconvolvability is explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the ...

  6. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT). We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM) and contrast-to-noise ratio (CNR) of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum), achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov filter.
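
    In the frequency domain, the three compared filters differ only in how they damp the division by the system response H. A schematic comparison is shown below, with an assumed flat noise-to-signal ratio nsr and regularization weight lam; with such flat parameters the Wiener and Tikhonov expressions coincide, whereas the paper's Tikhonov method is a regularized matrix inversion applied to the time-domain data.

        import numpy as np

        def deconvolution_filters(H, nsr=0.01, lam=0.01):
            # H: FFT of the measured impulse response.
            fourier_division = 1.0 / H                      # unregularized
            wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)    # noise-aware
            tikhonov = np.conj(H) / (np.abs(H) ** 2 + lam)  # L-2 penalty
            return fourier_division, wiener, tikhonov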

  7. Network statistics of genetically-driven gene co-expression modules in mouse crosses

    Directory of Open Access Journals (Sweden)

    Marie-Pier Scott-Boyer

    2013-12-01

    Full Text Available In biology, networks are used in different contexts as ways to represent relationships between entities, such as interactions between genes, proteins or metabolites. Despite progress in the analysis of such networks and their potential to improve understanding of the collective impact of genes on complex traits, one remaining challenge is to establish the biologic validity of gene co-expression networks and to determine what governs their organization. We used WGCNA to construct and analyze seven gene expression datasets from several tissues of mouse recombinant inbred strains (RIS). For six of the seven networks, linkage to module QTLs (mQTLs) could be established for 29.3% of the gene co-expression modules detected in the several mouse RIS. For about 74.6% of such genetically-linked modules, the mQTL was on the same chromosome as the one contributing most genes to the module, with genes originating from that chromosome showing higher connectivity than other genes in the module. Such modules (which we considered genetically driven) had network statistic properties (density, centralization and heterogeneity) that set them apart from other modules in the network. Altogether, a sizeable portion of the gene co-expression modules detected in mouse RIS panels had genetic determinants as their main organizing principle. In addition to providing a biologic validation for these modules, these genetic determinants imparted on them particular properties that set them apart from other modules in the network, to the point that they can be predicted to a large extent on the basis of their network statistics.

  8. Microarray data and gene expression statistics for Saccharomyces cerevisiae exposed to simulated asbestos mine drainage

    Directory of Open Access Journals (Sweden)

    Heather E. Driscoll

    2017-08-01

    Full Text Available Here we describe microarray expression data (raw and normalized), experimental metadata, and gene-level data with expression statistics from Saccharomyces cerevisiae exposed to simulated asbestos mine drainage from the Vermont Asbestos Group (VAG) Mine on Belvidere Mountain in northern Vermont, USA. For nearly 100 years (between the late 1890s and 1993), chrysotile asbestos fibers were extracted from serpentinized ultramafic rock at the VAG Mine for use in the construction and manufacturing industries. Studies have shown that water courses and streambeds nearby have become contaminated with asbestos mine tailings runoff, including elevated levels of magnesium, nickel, chromium, and arsenic, elevated pH, and chrysotile asbestos-laden mine tailings, due to leaching and gradual erosion of massive piles of mine waste covering approximately 9 km2. We exposed yeast to simulated VAG Mine tailings leachate to help gain insight into how eukaryotic cells exposed to VAG Mine drainage may respond in the mine environment. Affymetrix GeneChip® Yeast Genome 2.0 Arrays were utilized to assess gene expression after 24-h exposure to simulated VAG Mine tailings runoff. The chemistry of mine-tailings leachate, mine-tailings leachate plus yeast extract peptone dextrose media, and control yeast extract peptone dextrose media is also reported. To our knowledge this is the first dataset to assess global gene expression patterns in a eukaryotic model system simulating asbestos mine tailings runoff exposure. Raw and normalized gene expression data are accessible through the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) Database, Series GSE89875 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE89875).

  9. Statistical Use of Argonaute Expression and RISC Assembly in microRNA Target Identification

    Science.gov (United States)

    Stanhope, Stephen A.; Sengupta, Srikumar; den Boon, Johan; Ahlquist, Paul; Newton, Michael A.

    2009-01-01

    MicroRNAs (miRNAs) posttranscriptionally regulate targeted messenger RNAs (mRNAs) by inducing cleavage or otherwise repressing their translation. We address the problem of detecting m/miRNA targeting relationships in Homo sapiens from microarray data by developing statistical models that are motivated by the biological mechanisms used by miRNAs. The focus of our modeling is the construction, activity, and mediation of RNA-induced silencing complexes (RISCs) competent for targeted mRNA cleavage. We demonstrate that regression models accommodating RISC abundance and controlling for other mediating factors fit the expression profiles of known target pairs substantially better than models based on m/miRNA expression alone, and lead to verifications of computational target pair predictions that are more sensitive than those based on marginal expression levels. Because our models are fully independent of exogenous results from sequence-based computational methods, they are appropriate for use as either a primary or secondary source of information regarding m/miRNA target pair relationships, especially in conjunction with high-throughput expression studies. PMID:19779550

  10. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing, to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it; this a priori information reflects the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence, and deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating information, so automatic characterization should be possible in the future. (author)

  11. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    ... of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., the point-spread function. A significant limitation of deconvolution is, however, the additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms ...

  12. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors
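
    The least-squares approach can be illustrated by fitting a two-pulse model to a digitized waveform. The bi-exponential pulse shape, time constants and noise level below are assumptions chosen to resemble a scintillator pulse, not the actual parameters of the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def pulse(t, t0, a, tau_r=5.0, tau_d=40.0):
            # Bi-exponential scintillator-like pulse starting at t0
            dt = t - t0
            s = np.exp(-dt / tau_d) - np.exp(-dt / tau_r)
            return a * np.where(dt > 0, s, 0.0)

        def two_pulses(t, t1, a1, t2, a2):
            return pulse(t, t1, a1) + pulse(t, t2, a2)

        t = np.arange(0.0, 400.0)
        rng = np.random.default_rng(2)
        y = two_pulses(t, 50.0, 1.0, 90.0, 0.6) + rng.normal(0, 0.01, t.size)

        # Deconvolve the pile-up waveform into component times and amplitudes
        popt, _ = curve_fit(two_pulses, t, y, p0=(40.0, 0.8, 100.0, 0.5))
        print(popt)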

  13. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze data from the next generation of Compton telescopes, one which retrieves all source parameters (location, spectrum, polarization, flux) while achieving the best possible resolution and sensitivity, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: First, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but until now not both at once. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation, we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a ...

  15. A new efficient statistical test for detecting variability in the gene expression data.

    Science.gov (United States)

    Mathur, Sunil; Dolo, Samuel

    2008-08-01

    DNA microarray technology allows researchers to monitor the expression of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures, and each step is a potential source of variance. This makes the measurement of variability difficult, because an approach based on gene-by-gene estimation of variance will have few degrees of freedom. It is quite possible that neither the assumption of equal variance for all expression levels nor the assumption of normality of gene expression holds. Thus it is essential to have a statistical procedure which does not rest on the normality assumption and which can efficiently detect genes with differential variance. Detecting differences in gene expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of a normal distribution, which makes them inapplicable in many situations, and it is hard to verify the suitability of the normality assumption for a given data set. The proposed test does not require a distributional assumption for the underlying population, which makes it more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, showing that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with some existing procedures. The proposed test is found to be more powerful than commonly used tests under almost all the distributions considered in the study. ...
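
    The arctangent-based statistic itself is not reproduced here. As a point of reference for this class of nonparametric scale tests, a classical alternative (the Ansari-Bradley test) can be run directly with SciPy:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, 30)  # expression levels, condition A
        y = rng.normal(0.0, 2.0, 30)  # condition B: same mean, larger scale

        stat, p = stats.ansari(x, y)  # nonparametric test for equal scale
        print(stat, p)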

  16. Novel statistical framework to identify differentially expressed genes allowing transcriptomic background differences.

    Science.gov (United States)

    Ling, Zhi-Qiang; Wang, Yi; Mukaisho, Kenichi; Hattori, Takanori; Tatsuta, Takeshi; Ge, Ming-Hua; Jin, Li; Mao, Wei-Min; Sugihara, Hiroyuki

    2010-06-01

    Tests for differentially expressed genes (DEGs) from microarray experiments are based on the null hypothesis that genes which are irrelevant to the phenotype/stimulus are expressed equally in the target and control samples. However, this strict hypothesis is not always true, as there can be several transcriptomic background differences between target and control samples, including different cell/tissue types, different cell cycle stages and different biological donors. These differences lead to increased false positives, which have little biological/medical significance. In this article, we propose a statistical framework to identify DEGs between target and control samples from expression microarray data that allows for transcriptomic background differences between these samples, by introducing a modified null hypothesis under which the gene expression background difference is normally distributed. We use an iterative procedure to perform robust estimation under the null hypothesis and identify DEGs as outliers. We evaluated our method using our own triplicate microarray experiment, followed by validations with reverse transcription-polymerase chain reaction (RT-PCR) and on the MicroArray Quality Control dataset. The evaluations suggest that our technique (i) results in fewer false positive and false negative results, as measured by the degree of agreement with RT-PCR of the same samples, (ii) can be applied to different microarray platforms and results in better reproducibility, as measured by the degree of DEG identification concordance both intra- and inter-platform, and (iii) can be applied efficiently with only a few microarray replicates. Based on these evaluations, we propose that this method not only identifies more reliable and biologically/medically significant DEGs, but also reduces the power-cost tradeoff problem in the microarray field. Source code and binaries are freely available for download at http://comonca.org.cn/fdca/resources/softwares/deg.zip.
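
    The gist of the framework, robustly fitting a normal null to the per-gene background differences and calling the outliers DEGs, can be sketched as below. The z cutoff and stopping rule are assumptions for illustration; the authors' actual estimator is in their released source code.

        import numpy as np

        def outlier_degs(diff, z=3.0, max_iter=20):
            # diff: per-gene expression differences (target minus control).
            # Iteratively fit a normal null on non-outliers, flag outliers.
            mask = np.ones(diff.size, dtype=bool)
            for _ in range(max_iter):
                mu, sd = diff[mask].mean(), diff[mask].std(ddof=1)
                new_mask = np.abs(diff - mu) <= z * sd
                if np.array_equal(new_mask, mask):
                    break
                mask = new_mask
            return ~mask  # True marks a candidate DEG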

  17. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting, and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak width variation performs best. The ultimate approach, as yet to be implemented, might be library-oriented fitting with allowed peak position variation according to the peak energy uncertainty specified in the library. (author)

  18. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality for the evaluation of cerebrovascular diseases, particularly acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance to existing methods and potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  19. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

    We solve numerically a deconvolution problem to extract the undisturbed spectrum from a measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of neutral pions by detecting their decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation. (C) 2010 Elsevier B.V. All rights reserved.
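
    The regularized inversion at the core of this procedure is compact to write down. The sketch below solves the discretized system with a plain identity regularizer; the L-curve choice of lam used in the paper is not shown.

        import numpy as np

        def tikhonov_solve(A, b, lam):
            # Minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)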

  20. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to smooth the data because of the sensitivity of the deconvolution algorithms to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value, for two filters (linear and non-linear). To test the effectiveness of these optimal smoothing values, some parameters of the RRF calculated with this optimal smoothing were compared with the theoretical ones; the comparison revealed better results for the linear filter than for the non-linear one. The study was carried out by simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  1. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Science.gov (United States)

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method for a given application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combinations of multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies, and (3) HS(r): DE genes with non-zero effect sizes in the majority of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation on real data, together with the results from the MDS and entropy analyses, provides an insightful and practical guideline for choosing the most suitable method in a given application. All source files for simulation and real data are available on the authors' publication website.
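
    The 12 compared methods are not reproduced here; as one classical member of this family, Fisher's p-value combination for a single gene across k studies looks as follows.

        import numpy as np
        from scipy import stats

        def fisher_combine(pvals):
            # Fisher's method: -2 * sum(log p) ~ chi-squared with 2k df
            # under the joint null hypothesis.
            pvals = np.asarray(pvals, dtype=float)
            chi2 = -2.0 * np.log(pvals).sum()
            return stats.chi2.sf(chi2, df=2 * len(pvals))

        print(fisher_combine([0.01, 0.20, 0.03]))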

  2. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of a radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine pentaacetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20 s frames were acquired; the study of each patient lasted approximately 20 min. Time-activity curves for the parenchymal region of interest of each kidney, as well as for the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by the two methods. Bland-Altman analysis demonstrated that 97% of the values in the study (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P analysis method can be expected to be more reproducible than the iterative deconvolution method, because the iterative technique relies heavily on the accuracy of the first point analyzed, with any errors carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot and can be considered an alternative technique for finding and calculating the renal uptake rate.
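
    The Rutland-Patlak analysis reduces to a straight-line fit. A bare-bones sketch is given below; frame selection and background correction, which matter in practice, are omitted, and the crude cumulative integral is an assumption.

        import numpy as np

        def rutland_patlak_slope(t, kidney, heart):
            # Plot kidney/heart against cumulative-heart/heart; on the
            # early linear segment the slope estimates the uptake rate.
            cum_heart = np.cumsum(heart * np.gradient(t))
            x = cum_heart / heart
            y = kidney / heart
            slope, _ = np.polyfit(x, y, 1)
            return slope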

  3. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and a seismic wavelet. One of the most important stages in seismic data processing is deconvolution, classically performed with inverse filters based on Wiener filter theory. This theory is limited by certain modelling assumptions which may not always be valid. The discrete form of the Kalman filter is therefore used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its ability to handle continually time-varying models and its high resolution capabilities. In this work, we use a discrete Kalman filter combined with primitive deconvolution. The filtering process works on the reflectivity function; hence the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with an energy waveform referred to as the seismic wavelet. A higher-frequency wavelet gives a smaller wavelength; graphs of these results are presented.
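
    A scalar random-walk Kalman filter illustrates the recursion involved; the seismic application replaces this toy state model with a state-space form of the reflectivity sequence, which is not reproduced here.

        import numpy as np

        def kalman_1d(z, q=1e-4, r=1e-2):
            # z: noisy observations; q, r: process/measurement noise variances
            x, p = 0.0, 1.0
            out = np.empty(len(z))
            for k, zk in enumerate(z):
                p = p + q             # predict under a random-walk model
                g = p / (p + r)       # Kalman gain
                x = x + g * (zk - x)  # correct with the innovation
                p = (1.0 - g) * p
                out[k] = x
            return out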

  4. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into the stability, accuracy, and time complexity of a numerical bijection between the time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving ... discrimination achieves mixed phase deconvolution and is equivalent to complex cepstrum based deconvolution by causality, which has lower time and space complexities as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing ...
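
    The ZZT representation itself is simple to state: the z-transform of a finite signal is, up to a power of z, a polynomial whose coefficients are the samples, so its zeros are the roots of that polynomial, and convolution maps to the union of zero sets. A toy demonstration with arbitrary coefficients:

        import numpy as np

        a = np.array([1.0, -0.5, 0.25])
        b = np.array([1.0, 0.3])
        za, zb = np.roots(a), np.roots(b)   # ZZT of each signal
        zc = np.roots(np.convolve(a, b))    # ZZT of their convolution
        print(np.sort_complex(zc))          # equals the union of za and zb
        print(np.sort_complex(np.concatenate([za, zb])))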

  5. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were ...
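
    For Poisson data, the maximum likelihood expectation maximisation approach mentioned here corresponds, in the deconvolution setting, to the Richardson-Lucy iteration. A one-dimensional sketch under an assumed shift-invariant point spread function (the Bayesian prior term of the thesis is omitted):

        import numpy as np

        def ml_em_deconvolve(y, psf, n_iter=25):
            # Richardson-Lucy / ML-EM for Poisson data (1D). Multiplicative
            # updates keep the estimate non-negative.
            psf = psf / psf.sum()
            est = np.full(y.shape, y.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = np.convolve(est, psf, mode="same")
                ratio = y / np.maximum(blurred, 1e-12)
                est *= np.convolve(ratio, psf[::-1], mode="same")
            return est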

  7. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  8. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717
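
    A toy version of PCA-based two-way admixture estimation: project genotypes onto the leading principal component of the pooled reference panel and interpolate between the two reference means. The 0/1/2 genotype coding and the clipping are assumptions; the published method's handling of marker selection and scaling is not reproduced.

        import numpy as np

        def pc1_admixture(ref_a, ref_b, hybrids):
            # Genotype matrices: samples x markers, coded 0/1/2.
            refs = np.vstack([ref_a, ref_b])
            mu = refs.mean(axis=0)
            _, _, vt = np.linalg.svd(refs - mu, full_matrices=False)
            pc1 = vt[0]  # first principal axis of the reference panel
            sa = (ref_a - mu) @ pc1
            sb = (ref_b - mu) @ pc1
            sh = (hybrids - mu) @ pc1
            return np.clip((sh - sb.mean()) / (sa.mean() - sb.mean()), 0.0, 1.0)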

  9. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images
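
    A minimal sketch of the frequency-domain inversion this record describes: the detected image is modelled as the primary image convolved with a kernel combining a delta function (primary fraction) and a normalised Gaussian (scatter fraction), and the inverse filter is applied via FFT. The width and scatter-fraction values below are illustrative, not the measured PSF parameters.

```python
# Sketch of scatter removal by frequency-domain inverse filtering.
# detected = primary * ((1-SF)*delta + SF*gaussian); invert in frequency.
import numpy as np

def scatter_otf(shape, sigma_pix=30.0, scatter_fraction=0.4):
    """Frequency response of (1-SF)*delta + SF*Gaussian(sigma_pix)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    g = np.exp(-2 * (np.pi * sigma_pix) ** 2 * (fx ** 2 + fy ** 2))
    return (1 - scatter_fraction) + scatter_fraction * g

def descatter(image, **kw):
    """Inverse-filter the detected image to estimate the primary image."""
    h = scatter_otf(image.shape, **kw)
    return np.real(np.fft.ifft2(np.fft.fft2(image) / h))

# forward-simulate a detected image with the same model, then invert
primary = np.zeros((128, 128)); primary[48:80, 48:80] = 100.0
detected = np.real(np.fft.ifft2(np.fft.fft2(primary) * scatter_otf(primary.shape)))
restored = descatter(detected)   # recovers `primary` up to round-off
```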

  10. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution

    Directory of Open Access Journals (Sweden)

    Kleinjans Jos

    2008-09-01

Full Text Available Abstract Background In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results In this paper we introduce a Bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, which is called Comparative Analysis of Shapley value (shortly, CASh), is applied to data concerning the gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for testing differential gene expression. Both lists of genes provided by CASh and t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, it turns out that the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than t-test for the detection of differential gene expression variability. Conclusion CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a

  11. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution.

    Science.gov (United States)

    Moretti, Stefano; van Leeuwen, Danitsja; Gmuender, Hans; Bonassi, Stefano; van Delft, Joost; Kleinjans, Jos; Patrone, Fioravante; Merlo, Domenico Franco

    2008-09-02

    In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. In this paper we introduce a Bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, which is called Comparative Analysis of Shapley value (shortly, CASh), is applied to data concerning the gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for testing differential gene expression. Both lists of genes provided by CASh and t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, it turns out that the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than t-test for the detection of differential gene expression variability. CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a synergistic effect between coalitional games and statistics that
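
    A hedged sketch of the relevance index and the group comparison, assuming the "microarray game" formulation in which each sample contributes the set of its abnormally expressed genes (its support), so that the game is an average of unanimity games and the Shapley value takes the closed form in the code. The comparison below uses a permutation scheme standing in for CASh's bootstrap; all abnormal-expression calls and data are synthetic.

```python
# Hedged sketch: Shapley-value relevance for a "microarray game" plus a
# group comparison. Assumes each sample j contributes its support supp_j,
# giving the closed form phi_i = mean_j 1{i in supp_j} / |supp_j|.
# Permutation variant shown here; CASh itself uses a bootstrap scheme.
import numpy as np

def shapley_relevance(abnormal):
    """abnormal: boolean (samples x genes) matrix of abnormal-expression
    calls; returns the per-gene Shapley value of the microarray game."""
    sizes = abnormal.sum(axis=1, keepdims=True).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        shares = np.where(sizes > 0, abnormal / sizes, 0.0)
    return shares.mean(axis=0)

def relevance_diff_test(abn_a, abn_b, n_perm=2000, seed=0):
    """Permutation p-values for the per-gene difference in relevance."""
    rng = np.random.default_rng(seed)
    obs = shapley_relevance(abn_a) - shapley_relevance(abn_b)
    pooled = np.vstack([abn_a, abn_b])
    na = abn_a.shape[0]
    null = np.empty((n_perm, pooled.shape[1]))
    for b in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        null[b] = (shapley_relevance(pooled[idx[:na]])
                   - shapley_relevance(pooled[idx[na:]]))
    return obs, (np.abs(null) >= np.abs(obs)).mean(axis=0)

rng = np.random.default_rng(1)
abn_exposed = rng.random((30, 100)) < 0.15
abn_exposed[:, :5] |= rng.random((30, 5)) < 0.5   # genes 0-4 more relevant
abn_control = rng.random((30, 100)) < 0.15
obs, pvals = relevance_diff_test(abn_exposed, abn_control)
print(np.argsort(pvals)[:5])                       # should favour genes 0-4
```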

  12. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

    Woelffel , William; Claireaux , Corinne; Toplis , Michael J.; Burov , Ekaterina; Barthel , Etienne; Shukla , Abhay; Biscaras , Johan; Chopinet , Marie-Hélène; Gouillart , Emmanuelle

    2015-01-01

Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi...

  13. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order to not be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  14. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

Full Text Available Abstract Background For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as regression fit to probe sequence models. We demonstrate its efficacy by extensive simulations as well as on various biological data.
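
    The degeneracy problem has a compact linear-algebra reading: with probe intensities modelled as y = A x for an incidence matrix A over variants, ambiguity is exactly rank deficiency of A. The toy example below, with an assumed incidence matrix, shows the rank check and a non-negative least-squares fit that matches the data without recovering the true abundances.

```python
# Sketch of the matrix view of splice-variant deconvolution: A[i, j] = 1
# if probe i interrogates a region contained in variant j, and x holds
# variant abundances. The example matrix is illustrative only.
import numpy as np
from scipy.optimize import nnls

# two variants sharing all their probed exons -> two identical columns
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
print(np.linalg.matrix_rank(A) < A.shape[1])   # True: abundances not unique

x_true = np.array([2.0, 3.0, 1.0])
y = A @ x_true
x_hat, _ = nnls(A, y)          # one of infinitely many exact solutions
print(x_hat, A @ x_hat - y)    # fits y, but x_hat need not equal x_true
```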

  15. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
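
    A minimal 1-D sketch of the approach, assuming a Gaussian LES filter with transfer function exp(-k²Δ²/24): Van Cittert iterations approximately invert the filter, and the subgrid scalar flux is closed by explicitly filtering products of the deconvolved fields. The fields, filter width, and iteration counts are synthetic choices, not the study's DNS configuration.

```python
# Sketch of the iterative-deconvolution closure in 1D: Van Cittert
# iterations invert a Gaussian filter, and bar(u*c) - bar(u)*bar(c) is
# closed by explicitly filtering products of the deconvolved fields.
import numpy as np

def gaussian_filter_periodic(f, delta, dx):
    """Gaussian LES filter of width delta, applied via FFT (periodic)."""
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-(k * delta) ** 2 / 24)))

def van_cittert(f_bar, delta, dx, n_iter=5, beta=1.0):
    """u_{k+1} = u_k + beta * (f_bar - G(u_k)); initialised with u_0 = f_bar."""
    u = f_bar.copy()
    for _ in range(n_iter):
        u = u + beta * (f_bar - gaussian_filter_periodic(u, delta, dx))
    return u

n, dx, delta = 256, 1.0 / 256, 8.0 / 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.3 * np.sin(14 * np.pi * x)
c = 0.5 * (1 + np.tanh((x - 0.5) / 0.02))          # flame-like scalar
u_bar, c_bar = (gaussian_filter_periodic(f, delta, dx) for f in (u, c))
u_star, c_star = van_cittert(u_bar, delta, dx), van_cittert(c_bar, delta, dx)
# modelled vs exact subgrid scalar flux
flux_model = gaussian_filter_periodic(u_star * c_star, delta, dx) - u_bar * c_bar
flux_exact = gaussian_filter_periodic(u * c, delta, dx) - u_bar * c_bar
```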

  16. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.

  17. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
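
    A toy sketch of the REO idea underlying CellComp: gene-pair orderings that hold in every control replicate define the stable background, and pairs stably reversed after treatment score the genes involved. The real algorithm adds the statistical control described above; the thresholding rule and data here are synthetic.

```python
# Minimal sketch of relative-expression-ordering (REO) disruption scoring,
# in the spirit of RankComp/CellComp (statistical control omitted).
import numpy as np

def stable_pairs(expr):
    """expr: genes x replicates. Boolean matrix of pairs (i, j) with
    expr[i] > expr[j] in every replicate."""
    greater = expr[:, None, :] > expr[None, :, :]
    return greater.all(axis=2)

rng = np.random.default_rng(0)
ctrl = rng.lognormal(3, 1, (50, 3)) + rng.normal(0, 0.05, (50, 3))
trt = ctrl + rng.normal(0, 0.05, (50, 3))
trt[:5] *= 4.0                           # genes 0-4 are truly up-regulated

s_ctrl = stable_pairs(ctrl)
reversed_pairs = s_ctrl & stable_pairs(trt).T   # i>j stably, now j>i stably
score = reversed_pairs.sum(axis=1) + reversed_pairs.sum(axis=0)
print(np.argsort(score)[::-1][:5])       # most-disrupted genes
```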

  18. Two-stage, in silico deconvolution of the lymphocyte compartment of the peripheral whole blood transcriptome in the context of acute kidney allograft rejection.

    Science.gov (United States)

    Shannon, Casey P; Balshaw, Robert; Ng, Raymond T; Wilson-McManus, Janet E; Keown, Paul; McMaster, Robert; McManus, Bruce M; Landsberg, David; Isbel, Nicole M; Knoll, Greg; Tebbutt, Scott J

    2014-01-01

    Acute rejection is a major complication of solid organ transplantation that prevents the long-term assimilation of the allograft. Various populations of lymphocytes are principal mediators of this process, infiltrating graft tissues and driving cell-mediated cytotoxicity. Understanding the lymphocyte-specific biology associated with rejection is therefore critical. Measuring genome-wide changes in transcript abundance in peripheral whole blood cells can deliver a comprehensive view of the status of the immune system. The heterogeneous nature of the tissue significantly affects the sensitivity and interpretability of traditional analyses, however. Experimental separation of cell types is an obvious solution, but is often impractical and, more worrying, may affect expression, leading to spurious results. Statistical deconvolution of the cell type-specific signal is an attractive alternative, but existing approaches still present some challenges, particularly in a clinical research setting. Obtaining time-matched sample composition to biologically interesting, phenotypically homogeneous cell sub-populations is costly and adds significant complexity to study design. We used a two-stage, in silico deconvolution approach that first predicts sample composition to biologically meaningful and homogeneous leukocyte sub-populations, and then performs cell type-specific differential expression analysis in these same sub-populations, from peripheral whole blood expression data. We applied this approach to a peripheral whole blood expression study of kidney allograft rejection. The patterns of differential composition uncovered are consistent with previous studies carried out using flow cytometry and provide a relevant biological context when interpreting cell type-specific differential expression results. We identified cell type-specific differential expression in a variety of leukocyte sub-populations at the time of rejection. The tissue-specificity of these differentially
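
    A hedged sketch of such a two-stage pipeline: stage one estimates sub-population fractions per sample by non-negative least squares against a signature matrix, and stage two regresses bulk expression on those fractions within each clinical group to contrast per-cell-type profiles (csSAM-style). The signature matrix, group labels, and data are synthetic placeholders, not the study's deconvolution model.

```python
# Hedged two-stage deconvolution sketch: (1) NNLS cell fractions,
# (2) per-group regression of bulk expression on fractions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_genes, n_cells, n_samp = 300, 4, 20
signature = rng.gamma(2.0, 2.0, (n_genes, n_cells))   # marker profiles

def estimate_fractions(bulk, signature):
    """Stage 1: per-sample cell fractions, renormalised to sum to 1."""
    f = np.array([nnls(signature, b)[0] for b in bulk.T])
    return f / f.sum(axis=1, keepdims=True)

def celltype_expression(bulk, fractions):
    """Stage 2: least-squares cell-type-specific profiles for one group."""
    coef, *_ = np.linalg.lstsq(fractions, bulk.T, rcond=None)
    return coef                                        # cells x genes

frac_true = rng.dirichlet(np.ones(n_cells), n_samp)    # samples x cells
bulk = signature @ frac_true.T + rng.normal(0, 0.5, (n_genes, n_samp))
frac_hat = estimate_fractions(bulk, signature)
group = np.arange(n_samp) < n_samp // 2               # e.g. rejection vs stable
profiles_a = celltype_expression(bulk[:, group], frac_hat[group])
profiles_b = celltype_expression(bulk[:, ~group], frac_hat[~group])
delta = profiles_a - profiles_b              # per-cell-type, per-gene contrast
```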

  19. Two-stage, in silico deconvolution of the lymphocyte compartment of the peripheral whole blood transcriptome in the context of acute kidney allograft rejection.

    Directory of Open Access Journals (Sweden)

    Casey P Shannon

    Full Text Available Acute rejection is a major complication of solid organ transplantation that prevents the long-term assimilation of the allograft. Various populations of lymphocytes are principal mediators of this process, infiltrating graft tissues and driving cell-mediated cytotoxicity. Understanding the lymphocyte-specific biology associated with rejection is therefore critical. Measuring genome-wide changes in transcript abundance in peripheral whole blood cells can deliver a comprehensive view of the status of the immune system. The heterogeneous nature of the tissue significantly affects the sensitivity and interpretability of traditional analyses, however. Experimental separation of cell types is an obvious solution, but is often impractical and, more worrying, may affect expression, leading to spurious results. Statistical deconvolution of the cell type-specific signal is an attractive alternative, but existing approaches still present some challenges, particularly in a clinical research setting. Obtaining time-matched sample composition to biologically interesting, phenotypically homogeneous cell sub-populations is costly and adds significant complexity to study design. We used a two-stage, in silico deconvolution approach that first predicts sample composition to biologically meaningful and homogeneous leukocyte sub-populations, and then performs cell type-specific differential expression analysis in these same sub-populations, from peripheral whole blood expression data. We applied this approach to a peripheral whole blood expression study of kidney allograft rejection. The patterns of differential composition uncovered are consistent with previous studies carried out using flow cytometry and provide a relevant biological context when interpreting cell type-specific differential expression results. We identified cell type-specific differential expression in a variety of leukocyte sub-populations at the time of rejection. The tissue-specificity of

  20. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data which represent a convolution of the former function with an instrument dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error and down to ∼20 μeV with 10% relative error. (orig.)
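
    A minimal sketch of Tikhonov-regularised deconvolution in this setting: writing the convolution with the resolution function as a linear system y = R s, the estimate minimises ||R s - y||² + λ||s||². The grid, resolution width, and regularisation weight below are illustrative, and the non-linear transformation the authors apply is omitted.

```python
# Sketch of Tikhonov-regularised deconvolution of a resolution-broadened
# spectrum. Grid, resolution function and noise level are illustrative.
import numpy as np

def tikhonov_deconvolve(y, kernel, lam=1e-2):
    """Minimise ||R s - y||^2 + lam * ||s||^2, with R the 'same'-mode
    convolution matrix of the resolution function."""
    n, m = y.size, kernel.size
    half = m // 2
    R = np.zeros((n, n))
    for i in range(n):
        for k in range(m):
            j = i - k + half          # y[i] = sum_k kernel[k] * s[i-k+half]
            if 0 <= j < n:
                R[i, j] = kernel[k]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ y)

e = np.linspace(-1, 1, 201)                      # energy-transfer grid
s_true = np.exp(-np.abs(e) / 0.05)               # quasielastic-like line
res = np.exp(-e[90:111] ** 2 / (2 * 0.03 ** 2)); res /= res.sum()
y = np.convolve(s_true, res, mode="same")
y += np.random.default_rng(3).normal(0, 1e-3, y.size)
s_hat = tikhonov_deconvolve(y, res)
```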

  1. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Statistical Considerations for Immunohistochemistry Panel Development after Gene Expression Profiling of Human Cancers

    Science.gov (United States)

    Betensky, Rebecca A.; Nutt, Catherine L.; Batchelor, Tracy T.; Louis, David N.

    2005-01-01

    In recent years there have been a number of microarray expression studies in which different types of tumors were classified by identifying a panel of differentially expressed genes. Immunohistochemistry is a practical and robust method for extending gene expression data to common pathological specimens with the advantage of being applicable to paraffin-embedded tissues. However, the number of assays required for successful immunohistochemical classification remains unclear. We propose a simulation-based method for assessing sample size for an immunohistochemistry investigation after a promising gene expression study of human tumors. The goals of such an immunohistochemistry study would be to develop and validate a marker panel that yields improved prognostic classification of cancer patients. We demonstrate how the preliminary gene expression data, coupled with certain realistic assumptions, can be used to estimate the number of immunohistochemical assays required for development. These assumptions are more tenable than alternative assumptions that would be required for crude analytic sample size calculations and that may yield underpowered and inefficient studies. We applied our methods to the design of an immunohistochemistry study for glioma classification and estimated the number of assays required to ensure satisfactory technical and prognostic validation. Simulation approaches for computing power and sample size that are based on existing gene expression data provide a powerful tool for efficient design of follow-up genomic studies. PMID:15858152
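
    The simulation logic can be sketched compactly: assume per-marker sensitivities and specificities implied by the pilot expression data, simulate immunohistochemistry panels at candidate sample sizes, and estimate the probability that a simple panel classifier reaches a target accuracy. All rates, the majority-vote rule, and targets below are illustrative assumptions, not the authors' design values.

```python
# Hedged Monte Carlo sketch of simulation-based sample-size assessment
# for an immunohistochemistry marker panel. All parameters illustrative.
import numpy as np

def power_for_n(n, sens=0.8, spec=0.9, n_markers=5, target_acc=0.85,
                n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        truth = rng.random(n) < 0.5                 # two tumour classes
        # marker stains positive with prob sens in class 1, 1-spec in class 0
        p = np.where(truth[:, None], sens, 1 - spec)
        panel = rng.random((n, n_markers)) < p
        pred = panel.mean(axis=1) > 0.5             # majority-vote call
        hits += (pred == truth).mean() >= target_acc
    return hits / n_sim

for n in (30, 60, 120):
    print(n, power_for_n(n))   # fraction of simulated studies that succeed
```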

  3. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Directory of Open Access Journals (Sweden)

    Ujjwal Maulik

Full Text Available Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining) to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from the biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how much the evolved rules are able to describe accurately the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors with other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers is also starting with the same post

  4. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    Science.gov (United States)

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining) to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques from the biological datasets. At first, a novel statistical strategy has been utilized to eliminate the insignificant/low-significant/redundant genes in such a way that significance level must satisfy the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than the other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time, and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how much the evolved rules are able to describe accurately the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors with other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data

  5. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

This thesis is devoted to the analysis of bioluminescence images of small animals, an imaging modality used in cancerology studies. Several problems complicate the analysis: the light from internal bioluminescent sources is diffused and absorbed by the tissues, and both system noise and cosmic-ray noise are present. Together these degrade image quality and make the images difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise that corrupts the acquired images; this filter forms the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the global approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  6. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in the practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
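
    A minimal sketch of SOR applied to deconvolution through the normal equations A x = b with A = H^T H + λI: each sweep is a Gauss–Seidel update relaxed by ω. The adaptive update of ω proposed in the record is replaced here by a fixed user-chosen value, and the blur matrix and data are synthetic.

```python
# Sketch of SOR for the regularised normal equations of a 1-D deblurring
# problem. Fixed relaxation parameter; adaptive schedule not implemented.
import numpy as np

def sor_solve(A, b, omega=1.5, n_iter=200):
    x = np.zeros_like(b)
    for _ in range(n_iter):
        for i in range(b.size):          # Gauss-Seidel sweep + relaxation
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

rng = np.random.default_rng(4)
n = 100
H = np.zeros((n, n))                     # simple 5-tap blur matrix
kern = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
for i in range(n):
    for k, w in enumerate(kern):
        j = i + k - 2
        if 0 <= j < n:
            H[i, j] = w
x_true = (rng.random(n) < 0.05) * rng.random(n)   # sparse "star field"
y = H @ x_true + rng.normal(0, 1e-3, n)
lam = 1e-3
A, b = H.T @ H + lam * np.eye(n), H.T @ y
x_hat = sor_solve(A, b, omega=1.6)       # A is SPD, so SOR converges for 0<omega<2
```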

  7. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...

  8. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

Vol. 64, No. 2 (2004), pp. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords: confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  9. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  10. Filtering and deconvolution for bioluminescence imaging of small animals; Filtrage et deconvolution en imagerie de bioluminescence chez le petit animal

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

This thesis is devoted to the analysis of bioluminescence images of small animals, an imaging modality used in cancerology studies. Several problems complicate the analysis: the light from internal bioluminescent sources is diffused and absorbed by the tissues, and both system noise and cosmic-ray noise are present. Together these degrade image quality and make the images difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise that corrupts the acquired images; this filter forms the first block of the proposed chain. For the deconvolution stage, we performed a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the global approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  11. A novel statistical algorithm for gene expression analysis helps differentiate pregnane X receptor-dependent and independent mechanisms of toxicity.

    Directory of Open Access Journals (Sweden)

    M Ann Mongan

Full Text Available Genome-wide gene expression profiling has become standard for assessing potential liabilities as well as for elucidating mechanisms of toxicity of drug candidates under development. Analysis of microarray data is often challenging due to the lack of a statistical model that is amenable to biological variation in a small number of samples. Here we present a novel non-parametric algorithm that requires minimal assumptions about the data distribution. Our method for determining differential expression consists of two steps: 1) we apply a nominal threshold on fold change and platform p-value to designate whether a gene is differentially expressed in each treated and control sample relative to the averaged control pool, and 2) we compare the number of samples satisfying criteria in step 1 between the treated and control groups to estimate the statistical significance based on a null distribution established by sample permutations. The method captures the group effect without being too sensitive to anomalies, as it allows tolerance for potential non-responders in the treatment group and outliers in the control group. Performance and results of this method were compared with the Significance Analysis of Microarrays (SAM) method. These two methods were applied to investigate hepatic transcriptional responses of wild-type (PXR+/+) and pregnane X receptor-knockout (PXR-/-) mice after 96 h exposure to CMP013, an inhibitor of β-secretase (β-site of amyloid precursor protein cleaving enzyme 1, or BACE1). Our results showed that CMP013 led to transcriptional changes in hallmark PXR-regulated genes and induced a cascade of gene expression changes that explained the hepatomegaly observed only in PXR+/+ animals. Comparison of concordant expression changes between PXR+/+ and PXR-/- mice also suggested a PXR-independent association between CMP013 and perturbations to cellular stress, lipid metabolism, and biliary transport.
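
    A hedged sketch of the two-step procedure as described: step one flags genes per sample by a nominal fold-change threshold against the averaged control pool (the platform p-value criterion is omitted here), and step two compares call counts between groups against a permutation null. Thresholds and data are illustrative placeholders.

```python
# Hedged sketch of the two-step non-parametric differential-expression
# test described above. Fold-change threshold only; data synthetic.
import numpy as np

def de_calls(samples, control_pool_mean, fc_thresh=1.5):
    """Step 1: flag genes whose fold change vs the control pool exceeds
    the nominal threshold in either direction."""
    fc = samples / control_pool_mean[:, None]
    return (fc > fc_thresh) | (fc < 1.0 / fc_thresh)     # genes x samples

def permutation_pvals(treated, control, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    pool_mean = control.mean(axis=1)
    calls = np.hstack([de_calls(treated, pool_mean),
                       de_calls(control, pool_mean)])
    nt = treated.shape[1]
    obs = calls[:, :nt].sum(1) - calls[:, nt:].sum(1)    # step 2 statistic
    null = np.empty((n_perm, calls.shape[0]))
    for b in range(n_perm):
        idx = rng.permutation(calls.shape[1])
        null[b] = calls[:, idx[:nt]].sum(1) - calls[:, idx[nt:]].sum(1)
    return (np.abs(null) >= np.abs(obs)).mean(axis=0)    # two-sided p per gene

rng = np.random.default_rng(8)
control = rng.lognormal(3, 0.3, (500, 4))
treated = control[:, :3] * rng.lognormal(0, 0.1, (500, 3))
treated[:10] *= 3.0                                      # 10 true DEGs
print(np.sort(permutation_pvals(treated, control))[:10])
```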

  12. Linnorm: improved statistical analysis for single cell RNA-seq expression data.

    Science.gov (United States)

    Yip, Shun H; Wang, Panwen; Kocher, Jean-Pierre A; Sham, Pak Chung; Wang, Junwen

    2017-12-15

Linnorm is a novel normalization and transformation method for the analysis of single cell RNA sequencing (scRNA-seq) data. Linnorm is designed to remove technical noise while preserving biological variation in scRNA-seq data, such that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

Full Text Available We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for automatic acquisition of seed words to improve the performance of the original cyberbullying detection method by Nitta et al. [1]. We conduct an experiment in exactly the same settings and find that the method, which is based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. We therefore hypothesize about the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We find that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.

  14. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system have to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters

  15. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
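
    The core of water-reference deconvolution can be sketched in a few lines: the metabolite FID is multiplied point-wise by the ratio of an ideal reference decay to the measured water FID, so lineshape distortions common to both signals cancel. The distortion model, decay constants, and frequencies below are synthetic; real pipelines add apodisation and careful handling of the division.

```python
# Hedged sketch of water-reference deconvolution for lineshape correction.
# All signal parameters are synthetic placeholders.
import numpy as np

n, dt = 1024, 1e-3
t = np.arange(n) * dt
distortion = np.exp(1j * 2 * np.pi * 1.5 * t ** 2)      # eddy-current-like phase
water = np.exp(-t / 0.12) * distortion                  # measured water FID
metab = np.exp(2j * np.pi * 60 * t) * np.exp(-t / 0.08) * distortion
ideal = np.exp(-t / 0.12)                               # target Lorentzian decay
eps = 1e-6
corrected = metab * ideal / (water + eps)               # distortion cancels
spectrum = np.fft.fftshift(np.fft.fft(corrected))       # clean metabolite line
```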

  16. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.
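
    A hedged sketch of the deconvolution step with the ingredients the record names, an estimated 1-D pulse and a noise-to-reflection covariance ratio, here combined in a stationary Wiener filter rather than the paper's fixed-interval Kalman formulation. The pulse, sampling rate, and scatterer model are synthetic.

```python
# Sketch of Wiener deconvolution of an ultrasound rf line given an
# estimated pulse and a noise-to-signal covariance ratio. Data synthetic.
import numpy as np

def wiener_deconvolve(rf, pulse, noise_to_signal=1e-2):
    n = rf.size
    P = np.fft.fft(pulse, n)
    G = np.conj(P) / (np.abs(P) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft(np.fft.fft(rf) * G))

rng = np.random.default_rng(5)
t = np.arange(64) / 20e6                                  # 20 MHz sampling
pulse = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 1e-6) / 4e-7) ** 2)
refl = rng.normal(0, 1, 1600) * (rng.random(1600) < 0.02) # sparse scatterers
rf = np.convolve(refl, pulse)[:1600] + rng.normal(0, 1e-2, 1600)
refl_hat = wiener_deconvolve(rf, pulse, noise_to_signal=1e-2)
```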

  17. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  18. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

Vol. 16, No. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  19. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  20. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

Vol. 21, No. 4 (2012), pp. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords: blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  1. Self-Organization of Genome Expression from Embryo to Terminal Cell Fate: Single-Cell Statistical Mechanics of Biological Regulation

    Directory of Open Access Journals (Sweden)

    Alessandro Giuliani

    2017-12-01

Full Text Available A statistical mechanical mean-field approach to the temporal development of biological regulation provides a phenomenological, but basic description of the dynamical behavior of genome expression in terms of autonomous self-organization with a critical transition (Self-Organized Criticality: SOC). This approach reveals the basis of self-regulation/organization of genome expression, where the extreme complexity of living matter precludes any strict mechanistic approach. The self-organization in SOC involves two critical behaviors: scaling-divergent behavior (genome avalanche) and sandpile-type critical behavior. Genome avalanche patterns, a competition between order (scaling) and disorder (divergence), reflect the opposite sequence of events characterizing the self-organization process in embryo development and helper T17 terminal cell differentiation, respectively. On the other hand, the temporal development of sandpile-type criticality (the degree of SOC control) in the mouse embryo suggests the existence of an SOC control landscape with a critical transition state (i.e., the erasure of zygote-state criticality). This indicates that a phase transition of the mouse genome before and after reprogramming (immediately after the late 2-cell state) occurs through a dynamical change in a control parameter. This result provides a quantitative open-thermodynamic appreciation of the still largely qualitative notion of the epigenetic landscape. Our results suggest: (i) the existence of coherent waves of condensation/de-condensation in chromatin, which are transmitted across regions of different gene-expression levels along the genome; and (ii) essentially the same critical dynamics we observed for cell-differentiation processes exist in overall RNA expression during embryo development, which is particularly relevant because it gives further proof of SOC control of overall expression as a universal feature.

  2. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed interval Kalman filter based deconvolution routine written in C is employed. It uses a state ... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core on an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds making it possible to perform real-time deconvolution on ultrasound data...

  3. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the image formation model is introduced into the feedback process to keep the restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, and thus the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Application of blind deconvolution with crest factor for recovery of original rolling element bearing defect signals

    International Nuclear Information System (INIS)

    Son, J. D.; Yang, B. S.; Tan, A. C. C.; Mathew, J.

    2004-01-01

Many machine failures are not detected well in advance due to the masking of background noise and attenuation of the source signal through the transmission media. Advanced signal processing techniques using adaptive filters and higher order statistics have been attempted to extract the source signal from the measured data at the machine surface. In this paper, blind deconvolution using the Eigenvector Algorithm (EVA) technique is used to recover a damaged bearing signal using only the measured signal at the machine surface. A damaged bearing signal corrupted by noise with a varying signal-to-noise (s/n) ratio was used to determine the effectiveness of the technique in detecting an incipient signal and the optimum choice of filter length. The results show that the technique is effective in detecting the source signal with an s/n ratio as low as 0.21, but requires a relatively large filter length.

  5. A method for express estimation of the octane number of gasoline using a portable spectroimpedance meter and statistical analysis methods

    Directory of Open Access Journals (Sweden)

    Mamykin A. V.

    2017-10-01

Full Text Available The authors propose a method for determination of the electro-physical characteristics of electrically insulating liquids, using different types of gasoline as an example. The method is based on spectral impedance measurements of a capacitor-type electrochemical cell filled with the liquid under study. The application of a sinusoidal test voltage in the frequency range of 0.1–10 Hz provides more accurate measurements in comparison with known traditional methods. A portable device for measuring the total electrical resistance (impedance) of dielectric liquids was designed and constructed. An approach for express estimation of the octane number of automobile gasoline using spectroimpedance measurements and multivariate statistical methods of data analysis has been proposed and tested.

  6. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
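
    A sketch of the classical Wiggins-style MED iteration in 1-D, in the spirit of this record: an FIR filter is updated so that its output maximises a kurtosis-like peakedness norm, recovering spiky sources from a smeared observation. The filter length, iteration count, and signals are synthetic, and edge handling is simplified to circular correlation.

```python
# Hedged sketch of Wiggins-style minimum entropy deconvolution (MED):
# iterate R f = crosscorr(x, y^3) to maximise the varimax/peakedness norm.
import numpy as np
from scipy.linalg import solve_toeplitz

def med_filter(x, filt_len=30, n_iter=30):
    # autocorrelation (first Toeplitz column) of the input, fixed throughout
    r = np.correlate(x, x, mode="full")[x.size - 1: x.size - 1 + filt_len]
    f = np.zeros(filt_len); f[filt_len // 2] = 1.0      # delayed-spike init
    for _ in range(n_iter):
        y = np.convolve(x, f, mode="full")[:x.size]
        # cross-correlation of x with y^3 (circular edge wrap for brevity)
        b = np.array([np.dot(y ** 3, np.roll(x, k)) for k in range(filt_len)])
        f = solve_toeplitz(r, b)
        f /= np.linalg.norm(f)
    return f

rng = np.random.default_rng(6)
spikes = rng.normal(0, 1, 2000) * (rng.random(2000) < 0.01)
h = np.exp(-np.arange(40) / 8.0) * np.cos(0.6 * np.arange(40))
x = np.convolve(spikes, h, mode="full")[:2000]          # smeared observation
f = med_filter(x)
y = np.convolve(x, f, mode="full")[:2000]               # spiky MED output
```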

  7. A statistical method for predicting splice variants between two groups of samples using GeneChip® expression array data

    Directory of Open Access Journals (Sweden)

    Olson James M

    2006-04-01

Full Text Available Abstract Background Alternative splicing of pre-messenger RNA results in RNA variants with combinations of selected exons. It is one of the essential biological functions and regulatory components in higher eukaryotic cells. Some of these variants are detectable with the Affymetrix GeneChip® that uses multiple oligonucleotide probes (i.e., a probe set), since the target sequences for the multiple probes are adjacent within each gene. Hybridization intensity from a probe correlates with abundance of the corresponding transcript. Although the multiple-probe feature in the current GeneChip® was designed to assess expression values of individual genes, it also measures transcriptional abundance for a sub-region of a gene sequence. This additional capacity motivated us to develop a method to predict alternative splicing, taking advantage of extensive repositories of GeneChip® gene expression array data. Results We developed a two-step approach to predict alternative splicing from GeneChip® data. First, we clustered the probes from a probe set into pseudo-exons based on similarity of probe intensities and physical adjacency. A pseudo-exon is defined as a sequence in the gene within which multiple probes have comparable probe intensity values. Second, for each pseudo-exon, we assessed the statistical significance of the difference in probe intensity between two groups of samples. Differentially expressed pseudo-exons are predicted to be alternatively spliced. We applied our method to empirical data generated from GeneChip® Hu6800 arrays, which include 7129 probe sets and twenty probes per probe set. The dataset consists of sixty-nine medulloblastoma (27 metastatic and 42 non-metastatic) samples and four cerebellum samples as normal controls. We predicted that 577 genes would be alternatively spliced when we compared normal cerebellum samples to medulloblastomas, and predicted that thirteen genes would be alternatively spliced when we compared metastatic
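
    A toy sketch of the two-step approach: adjacent probes are greedily clustered into pseudo-exons wherever mean intensity is similar, then each pseudo-exon's intensity is t-tested between the two sample groups. The jump threshold, grouping rule, and data are illustrative, not the authors' clustering criterion.

```python
# Minimal sketch of pseudo-exon clustering plus per-pseudo-exon testing.
# Thresholds and data are synthetic placeholders.
import numpy as np
from scipy import stats

def pseudo_exons(probe_means, jump=1.0):
    """Greedy 1-D clustering of adjacent probes by intensity similarity."""
    labels, cur = [0], 0
    for a, b in zip(probe_means[:-1], probe_means[1:]):
        if abs(b - a) > jump:
            cur += 1
        labels.append(cur)
    return np.array(labels)

rng = np.random.default_rng(7)
n_probes, n_a, n_b = 20, 10, 10
base = np.repeat([8.0, 11.0, 8.5], [7, 6, 7])       # three probe regions
grp_a = base[:, None] + rng.normal(0, 0.3, (n_probes, n_a))
grp_b = grp_a.copy() + rng.normal(0, 0.3, (n_probes, n_b))
grp_b[7:13] -= 2.0                 # middle region skipped in group B
labels = pseudo_exons(grp_a.mean(axis=1))
for pe in np.unique(labels):
    sel = labels == pe
    t, p = stats.ttest_ind(grp_a[sel].mean(axis=0), grp_b[sel].mean(axis=0))
    print(f"pseudo-exon {pe}: p = {p:.3g}")
```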

  8. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    Full Text Available The auditory steady-state response (ASSR) is one of the main clinical approaches for health screening and frequency-specific hearing assessment. However, its generation mechanism is still much in controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, while MSAD uses evenly-spaced stimulus sequences with the same ISIs, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology, and distinct frequency characteristics in the synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. Such similarity is also demonstrated at the individual level, where only MSAD shows no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of ASSR but

  9. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

    The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where plural trapping levels have a probability of retrapping, electrons trapped at one level can transfer to another level through retrapping via the conduction band during TL readout. At present, however, the method of CGCD takes no account of such electron transitions between trapping levels; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  10. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  11. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that samples the nuclear radiation signals and sets the USB interface to Slave FIFO mode for high-speed transmission; LabVIEW is used for online data processing and display. A blind deconvolution method removes pile-up from the acquired signals and restores the nuclear pulse signal. Real-time measurements demonstrate the advantages of the approach. (authors)

  12. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

    In this paper, we present a method for measurement and analysis of nuclear pulse signal, with which pile-up signal is removed, the signal baseline is restored, and the original signal is obtained. The data acquisition system includes FPGA, ADC and USB. The FPGA controls the high-speed ADC to sample the signal of nuclear radiation, and the USB makes the ADC work on the Slave FIFO mode to implement high-speed transmission status. Using the LabVIEW, it accomplishes online data processing of the blind deconvolution algorithm and data display. The simulation and experimental results demonstrate advantages of the method. (authors)

  13. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.

  14. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making.

  15. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to a reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm that works on the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure of baseline removal followed by peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment was conducted on real spectra: a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, was observed and validated by an extended statistical analysis.
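    The additive model can be written y = Ka + Bc + noise, with K a convolution matrix built from the known peak shape, a a sparse spike list and B a smooth baseline basis. The sketch below is an illustration rather than the paper's algorithm: it solves a single penalized non-negative least squares problem, using non-negativity of both the spike and baseline coefficients as a crude surrogate for the sparsity prior; the basis size and smoothing weight are arbitrary.

```python
import numpy as np
from scipy.optimize import nnls

def joint_fit(y, peak_shape, n_base=15, width=None, lam=5.0):
    """Jointly fit spikes (convolved with peak_shape) and a smooth baseline."""
    n = len(y)
    # K: column k holds the known peak shape starting at sample k
    K = np.zeros((n, n))
    for k in range(n):
        m = min(len(peak_shape), n - k)
        K[k:k + m, k] = peak_shape[:m]
    # B: coarse Gaussian bumps -> smooth, slowly varying baseline
    t = np.arange(n)
    centers = np.linspace(0, n - 1, n_base)
    width = width or (n / n_base)
    B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
    # Penalize rough baselines via second differences of the coefficients
    D2 = np.diff(np.eye(n_base), n=2, axis=0)
    A = np.vstack([np.hstack([K, B]),
                   np.hstack([np.zeros((n_base - 2, n)), lam * D2])])
    rhs = np.concatenate([y, np.zeros(n_base - 2)])
    coef, _ = nnls(A, rhs)
    spikes, base_coef = coef[:n], coef[n:]
    return spikes, B @ base_coef
```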

  16. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    Full Text Available The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern and the targets’ backscattering coefficients. Therefore, deconvolution algorithms are proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially highly ill-posed, sensitive to noise, and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed for improving the performance of deconvolution, which considerably alleviates the ill-posedness of single-channel deconvolution. To depict the performance improvement obtained by the multi-channel approach more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular value distribution of the observation matrix, and these are used to compare different deconvolution systems. We present two multi-channel deconvolution algorithms that improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data verify the effectiveness of the proposed imaging methods.

  17. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain an acceptable signal-to-noise ratio. Comparisons of the algorithm against Tikhonov regularized reconstructions demonstrate considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity constrained spectral deconvolution scheme is effective in minimizing line-shape distortions. The dual resolution reconstruction scheme is capable of minimizing spectral leakage artifacts. Copyright © 2013 Wiley Periodicals, Inc.

  18. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  19. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  20. A HOS-based blind deconvolution algorithm for the improvement of time resolution of mixed phase low SNR seismic data

    International Nuclear Information System (INIS)

    Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M

    2009-01-01

    A blind deconvolution technique using a modified higher order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low SNR, short length seismograms. In our study, the seismogram is assumed to be the output of a mixed phase source wavelet (system) driven by a non-Gaussian input signal (due to the earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum phase seismic signals because they rely only on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable for the processing of a non-minimum (mixed) phase system; however, most of them are unable to converge and show poor performance whenever noise dominates the actual signal, especially in cases where the observed data are limited (few samples). The developed technique is primarily based on the EVA for blind equalization, initially intended for mixed phase non-Gaussian seismic signals. In order to deal with the dominant noise issue and the small number of available samples, certain modifications are incorporated into the EVA. For determining the deconvolution filter, one of the modifications is to use more than one higher order cumulant slice in the EVA. This overcomes the possibility of non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by increasing the power of the eigenvalues of the cumulant slice related to the actual signal, and rejects the eigenvalues below a threshold representing the noise. This modification reduces the effect of the availability of a small number of samples and strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in mean square error (MSE) and a six

  1. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed, the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann, and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples at thermal equilibrium and not at thermal equilibrium is reported. The method is accurate and computationally efficient

  2. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  3. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics particularly in short dynamic frames and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy to implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [(11)C] Raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3% while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectivity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  4. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in 'de-blurring' 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from 85Sr. The technique, when applied to a series of well annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)
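    As an illustration of non-negativity-constrained deconvolution of 2D image-like data, the sketch below implements a plain Richardson-Lucy iteration, a standard non-negativity-preserving scheme that is not necessarily the algorithm these authors used; the instrumental resolution function `psf` would be measured, e.g., from the 85Sr line.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Non-negativity-preserving iterative deconvolution of a 2D image.
    Each update multiplies the estimate by a back-projected ratio, so a
    non-negative starting estimate stays non-negative throughout."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    est = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # guard empty bins
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```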

  5. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

    Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (199IHg and 201MeHg) are added before sample preparation to quantify the extent of methylation and demethylation processes. Focused microwave digestion was evaluated to perform the quantitative extraction of such compounds from solid matrices of environmental interest. Satisfactory results were obtained for different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464), both by using GC-ICPMS and GC-MS, demonstrating the suitability of the proposed analytical method. (author)

  6. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function, F(t) = (1 + e^(-αβ))/(1 + e^((t-α)β)), was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations ranging from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the fitted Fermi function. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
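    A sketch of the iterative convolving idea: convolve the measured input function with the two-parameter Fermi function and choose α and β by least squares. The code below is illustrative only; the curves, sampling step and optimizer settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fermi(t, alpha, beta):
    # F(t) = (1 + exp(-alpha*beta)) / (1 + exp((t - alpha)*beta)); F(0) = 1
    return (1 + np.exp(-alpha * beta)) / (1 + np.exp((t - alpha) * beta))

def fit_impulse_response(t, input_curve, observed, dt):
    """Find (alpha, beta) such that input convolved with the Fermi function
    best fits the observed small intestinal time-activity curve, then
    integrate the fitted retention function to get the mean transit time."""
    def cost(p):
        model = np.convolve(input_curve, fermi(t, *p))[:len(t)] * dt
        return np.sum((model - observed) ** 2)
    alpha, beta = minimize(cost, x0=[2.0, 2.0], method="Nelder-Mead").x
    mtt = np.sum(fermi(t, alpha, beta)) * dt   # MTT = integral of F(t), F(0)=1
    return alpha, beta, mtt
```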

  7. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution for different-volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the “deconvolution” functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2 × 2 cm to 20 × 20 cm, along both the lateral and longitudinal directions. For field sizes from 20 × 20 cm to 30 × 30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers involved in the study. The difference in penumbral values between the deconvolved profiles along the lateral and longitudinal directions was of the order of 0.1 to 0.3 mm for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and is not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.

  8. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  9. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at each of the energies.

  10. Increasing the darkfield contrast-to-noise ratio using a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging.

    Science.gov (United States)

    Weber, Thomas; Pelzer, Georg; Bayer, Florian; Horn, Florian; Rieger, Jens; Ritter, André; Zang, Andrea; Durst, Jürgen; Anton, Gisela; Michel, Thilo

    2013-07-29

    A novel information retrieval algorithm for X-ray grating-based phase-contrast imaging, based on the deconvolution of the object and the reference phase stepping curve (PSC) as proposed by Modregger et al., was investigated in this paper. We applied the method for the first time to data obtained with a polychromatic spectrum and compared the results with those obtained by applying the commonly used method based on a Fourier analysis. We confirmed the expectation that both methods deliver the same results for the absorption and the differential phase image. For the darkfield image, a mean contrast-to-noise ratio (CNR) increase by a factor of 1.17 using the new method was found. Furthermore, the dose saving potential of the deconvolution method was estimated experimentally. It is found that the conventional method requires a dose higher by a factor of 1.66 to obtain a CNR value similar to that of the novel method. A further analysis of the data revealed that the improvement in CNR and dose efficiency is due to the superior background noise properties of the deconvolution method, but comes at the cost of comparability between measurements at different applied dose values, as the mean value becomes dependent on the photon statistics used.

  11. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    Science.gov (United States)

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
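    The core of the KS approach is a two-sample test between the ZAP-70 fluorescence distributions of T cells and B cells from the same tube, which scipy exposes directly. The arrays and simulated values below are toy illustrations, not the authors' data or software.

```python
import numpy as np
from scipy.stats import ks_2samp

def zap70_ks_statistic(t_cell_fluor, b_cell_fluor):
    """Return the KS D statistic and p-value between T-cell and B-cell
    ZAP-70 fluorescence intensities; a small D means the B cells express
    ZAP-70 at near T-cell levels (a ZAP-70 'positive' CLL case)."""
    d_stat, p_value = ks_2samp(t_cell_fluor, b_cell_fluor)
    return d_stat, p_value

# Toy usage with simulated log-fluorescence values
rng = np.random.default_rng(1)
t_cells = rng.normal(3.0, 0.4, 5000)    # T cells: high ZAP-70
b_cells = rng.normal(2.8, 0.5, 20000)   # B cells of a ZAP-70-positive case
print(zap70_ks_statistic(t_cells, b_cells))
```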

  12. Statistical deconvolution of enthalpic energetic contributions to MHC-peptide binding affinity

    Directory of Open Access Journals (Sweden)

    Drew Michael GB

    2006-03-01

    Full Text Available Abstract Background MHC Class I molecules present antigenic peptides to cytotoxic T cells, which forms an integral part of the adaptive immune response. Peptides are bound within a groove formed by the MHC heavy chain. Previous approaches to MHC Class I-peptide binding prediction have largely concentrated on the peptide anchor residues located at the P2 and C-terminus positions. Results A large dataset comprising MHC-peptide structural complexes was created by re-modelling pre-determined x-ray crystallographic structures. Static energetic analysis, following energy minimisation, was performed on the dataset in order to characterise interactions between bound peptides and the MHC Class I molecule, partitioning the interactions within the groove into van der Waals, electrostatic and total non-bonded energy contributions. Conclusion The QSAR techniques of Genetic Function Approximation (GFA) and Genetic Partial Least Squares (G/PLS) algorithms were used to identify key interactions between the two molecules by comparing the calculated energy values with experimentally-determined BL50 data. Although the peptide termini binding interactions help ensure the stability of the MHC Class I-peptide complex, the central region of the peptide is also important in defining the specificity of the interaction. As thermodynamic studies indicate that peptide association and dissociation may be driven entropically, it may be necessary to incorporate entropic contributions into future calculations.

  13. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility has been used to perform deconvolution analysis to both experimental and reference glow curves resulted from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.

  14. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility has been used to perform deconvolution analysis to both experimental and reference glow curves resulted from the Glow Curve Analysis Intercomparison project. The simple interface of this programme combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters. (authors)

  15. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed-One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined in terms of the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it needs to be implemented on post-stack or pre-stack seismic data from regions of complex structure.
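    Sparse spiking deconvolution of this kind can be posed as minimizing ½||y − Wa||² + λ||a||₁, with W the wavelet convolution matrix. The sketch below uses a generic ISTA (proximal-gradient) loop as a stand-in; it does not reproduce the paper's MM or SOOT iterations, and the wavelet, λ and iteration count are assumptions.

```python
import numpy as np

def ista_deconvolution(y, wavelet, lam=0.1, n_iter=500):
    """l1-penalized spiking deconvolution via iterative soft thresholding."""
    n = len(y)
    W = np.zeros((n, n))                  # convolution matrix of the wavelet
    for k in range(n):
        m = min(len(wavelet), n - k)
        W[k:k + m, k] = wavelet[:m]
    L = np.linalg.norm(W, 2) ** 2         # Lipschitz constant of the gradient
    a = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a                              # estimated reflectivity series
```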

  16. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

    Mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t, divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach may underestimate the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves, obtained from patients with different renal clearance levels, with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the true MTT when the MTT was higher than 6 min. The longer the MTT, the greater the underestimation. Up to an MTT value of 6 min, the error in the MTT estimation is negligible. As normal cortical transit is less than 2 min, this approach is used in patients to calculate the pixel-by-pixel cortical mean transit time and to create an MTT parametric image without deconvolution.
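    The computation itself is a ratio of two running integrals, evaluated pixel by pixel to build the parametric image. The sketch below is illustrative (array shapes and frame-duration handling are assumptions); as the paper stresses, it is only trustworthy when the acquisition is long relative to the transit time, roughly MTT below 6 min for a 20 min renogram.

```python
import numpy as np

def mtt_parametric_image(frames, times, plasma):
    """frames: (T, H, W) renogram frames; plasma: (T,) plasma activity.
    Returns an (H, W) image of the per-pixel ratio of running integrals,
    integral of pixel activity over integral of plasma activity."""
    dt = np.gradient(times)                      # approximate frame durations
    num = np.tensordot(dt, frames, axes=(0, 0))  # per-pixel integral of R(t)
    den = np.sum(plasma * dt)                    # integral of plasma activity
    return num / den
```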

  17. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.

  18. Blind deconvolution of seismograms regularized via minimum support

    International Nuclear Information System (INIS)

    Royer, A A; Bostock, M G; Haber, E

    2012-01-01

    The separation of earthquake source signature and propagation effects (the Earth’s ‘Green’s function’) that encode a seismogram is a challenging problem in seismology. The task of separating these two effects is called blind deconvolution. By considering seismograms of multiple earthquakes from similar locations recorded at a given station and that therefore share the same Green’s function, we may write a linear relation in the time domain, u_i(t) * s_j(t) − u_j(t) * s_i(t) = 0, where u_i(t) is the seismogram for the i-th source and s_j(t) is the j-th unknown source. The symbol * represents the convolution operator. From two or more seismograms, we obtain a homogeneous linear system where the unknowns are the sources. This system is subject to a scaling constraint to deliver a non-trivial solution. Since source durations are not known a priori and must be determined, we augment our system by introducing the source durations as unknowns and we solve the combined system (sources and source durations) using separation of variables. Our solution is derived using direct linear inversion to recover the sources and Newton’s method to recover source durations. This method is tested using two sets of synthetic seismograms created by convolution of (i) random Gaussian source-time functions and (ii) band-limited sources with a simplified Green’s function and signal to noise levels up to 10% with encouraging results. (paper)
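    A compact way to see the homogeneous system: stack convolution matrices of the two seismograms and take the null vector, which contains both sources up to a common scale. The sketch below solves the two-event case by SVD for a fixed, known source length; it omits the paper's duration search and minimum-support regularization.

```python
import numpy as np

def conv_matrix(u, m):
    """(len(u)+m-1) x m matrix C with C @ s == np.convolve(u, s)."""
    C = np.zeros((len(u) + m - 1, m))
    for j in range(m):
        C[j:j + len(u), j] = u
    return C

def cross_relation_sources(u1, u2, m):
    """Solve u1*s2 - u2*s1 = 0 for [s1; s2] as the right singular vector
    belonging to the smallest singular value (scale fixed to unit norm)."""
    A = np.hstack([-conv_matrix(u2, m), conv_matrix(u1, m)])
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]
    return v[:m], v[m:]      # s1, s2 up to a common scale factor
```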

  19. ArraySolver: An Algorithm for Colour-Coded Graphical Display and Wilcoxon Signed-Rank Statistics for Comparing Microarray Gene Expression Data

    OpenAIRE

    Khan, Haseeb Ahmad

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another ...

  20. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    Directory of Open Access Journals (Sweden)

    Ma Ligeng

    2003-11-01

    Full Text Available Abstract Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).

  1. ArraySolver: an algorithm for colour-coded graphical display and Wilcoxon signed-rank statistics for comparing microarray gene expression data.

    Science.gov (United States)

    Khan, Haseeb Ahmad

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-groups comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas the former program appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, convenient report format, accurate statistics and the familiar Excel platform.

  2. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating STF, but it requires a strict recording condition. Waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real time event-screening process.
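    The “sdc” spikiness measure is simple to state in code. The sketch below follows the description in the abstract (peak over the mean absolute background, excluding 10 s around the peak); the sampling-rate handling is an assumption.

```python
import numpy as np

def sdc(deconvolution, fs, exclude_s=10.0):
    """Peak of the deconvolution divided by the mean absolute value of
    the background, excluding `exclude_s` seconds centred on the peak."""
    i_peak = int(np.argmax(np.abs(deconvolution)))
    half = int(exclude_s * fs / 2)
    mask = np.ones(len(deconvolution), dtype=bool)
    mask[max(0, i_peak - half):i_peak + half + 1] = False
    background = np.mean(np.abs(deconvolution[mask]))
    return np.abs(deconvolution[i_peak]) / background
```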

  3. Use of new spectral analysis methods in gamma spectra deconvolution

    International Nuclear Information System (INIS)

    Pinault, J.L.

    1991-01-01

    A general deconvolution method applicable to X and gamma ray spectrometry is proposed. Using new spectral analysis methods, it is applied to an actual case: the accurate on-line analysis of three elements (Ca, Si, Fe) in a cement plant using neutron capture gamma rays. Neutrons are provided by a low activity (5 μg) 252Cf source; the detector is a BGO 3 in. × 8 in. scintillator. The principle of the method rests on the Fourier transform of the spectrum. The search for peaks and the determination of peak areas are worked out in the Fourier representation, which enables separation of background and peaks and very efficiently discriminates peaks, or elements represented by several peaks. First the spectrum is transformed so that in the new representation the full width at half maximum (FWHM) is independent of energy. The spectrum is then arranged symmetrically and transformed into the Fourier representation. The latter is multiplied by a function in order to transform the original Gaussian peaks into Lorentzian peaks. An autoregressive filter is calculated, leading to a characteristic polynomial whose complex roots represent both the location and the width of each peak, provided that the absolute value is lower than unity. The amplitude of each component (the area of each peak, or the sum of areas of the peaks characterizing an element) is fitted by the weighted least squares method, taking into account that errors in spectra are independent and follow a Poisson law. Very accurate results are obtained, which would be hard to achieve by other methods. The DECO FORTRAN code has been developed for PC-compatible microcomputers. Some features of the code are given. (orig.)
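    The final amplitude-fitting step is an ordinary weighted least squares problem with Poisson weights (variance approximately equal to the counts). A minimal sketch, assuming the design matrix X of peak shapes has already been built from the located peaks:

```python
import numpy as np

def poisson_wls_amplitudes(X, counts):
    """Solve (X^T W X) a = X^T W y with W = diag(1/var), var ~ counts,
    as appropriate for independent Poisson-distributed channel counts."""
    w = 1.0 / np.maximum(counts, 1.0)   # guard empty channels
    XtW = X.T * w                       # broadcast the per-channel weights
    return np.linalg.solve(XtW @ X, XtW @ counts)
```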

  4. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic lesion extracted features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011), are recently-proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  5. Deconvolution of the tree ring based delta13C record

    International Nuclear Information System (INIS)

    Peng, T.; Broecker, W.S.; Freyer, H.D.; Trumbore, S.

    1983-01-01

    We assumed the tree-ring based 13C/12C record constructed by Freyer and Belacy (1983) to be representative of the fossil fuel and forest-soil induced 13C/12C change for atmospheric CO2. Through the use of a modification of the Oeschger et al. ocean model, we have computed the contribution of the combustion of coal, oil, and natural gas to this observed 13C/12C change. A large residual remains when the tree-ring-based record is corrected for the contribution of fossil fuel CO2. A deconvolution was performed on this residual to determine the time history and magnitude of the forest-soil reservoir changes over the past 150 years. Several important conclusions were reached. (1) The magnitude of the integrated CO2 input from these sources was about 1.6 times that from fossil fuels. (2) The forest-soil contribution reached a broad maximum centered at about 1900. (3) Over the two-decade period covered by the Mauna Loa atmospheric CO2 content record, the input from forests and soils was about 30% of that from fossil fuels. (4) The 13C/12C trend over the last 20 years was dominated by the input of fossil fuel CO2. (5) The forest-soil release did not contribute significantly to the secular increase in atmospheric CO2 observed over the last 20 years. (6) The pre-1850 atmospheric pCO2 values must have been in the range 245 to 270 × 10^-6 atmospheres.

  6. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  7. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
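    The Monte-Carlo idea is agnostic to the unfolding method: resample the sparse counts, rerun the deconvolution, and read the error bars off the spread. A generic sketch, where the deconvolve callable stands in for either of the paper's two methods:

```python
import numpy as np

def monte_carlo_error_bars(counts, deconvolve, n_trials=500, seed=0):
    """Propagate Poisson counting noise through an arbitrary deconvolution.

    counts     : observed channel counts (1-D array)
    deconvolve : callable mapping a counts array to an unfolded spectrum
    Returns (mean_spectrum, sigma_spectrum) over the Monte-Carlo trials.
    """
    rng = np.random.default_rng(seed)
    trials = np.array([deconvolve(rng.poisson(counts))
                       for _ in range(n_trials)])
    return trials.mean(axis=0), trials.std(axis=0, ddof=1)
```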

  8. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  9. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  10. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  11. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference from crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.

  12. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels is accounted for by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra and their associated retention times are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high-resolution library. More specifically, a correlative measure is used that includes the uncertainties in the least-squares fitting, as well as the probability of different proposals for the number of compounds in the mixture. The method was tested on simulated high-resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to accurately detect the retention times and the spectra of the peaks. In our case, an extreme coelution situation, 5 of the 7 compounds present in the selected region of interest were correctly assessed. Finally, a comparison with classical deconvolution methods (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds.

  13. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  14. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (spanning many bunch-crossing periods) is sampled and digitised in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies performed are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, an ADC and further digital processing implemented on a PC. (author)

  15. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the source distribution and the beamformer's point-spread function, and that the point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.
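
    A bare-bones sketch of one of the algorithms named above, Richardson-Lucy, applied to a beamforming map with a shift-invariant point-spread function; the arrays are illustrative, and real DAMAS2-style implementations add purely spectral convolutions and stopping rules.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(bmap, psf, n_iter=100, eps=1e-12):
    """Iteratively deconvolve a (non-negative) beamforming map `bmap` by `psf`.
    Assumes psf.sum() == 1 so that total map energy is preserved."""
    psf_flip = psf[::-1, ::-1]
    q = np.full_like(bmap, bmap.mean())              # flat, non-negative start
    for _ in range(n_iter):
        model = fftconvolve(q, psf, mode="same")
        ratio = bmap / np.maximum(model, eps)        # data / current prediction
        q = np.maximum(q * fftconvolve(ratio, psf_flip, mode="same"), 0.0)
    return q                                         # non-negative by construction
```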

  16. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low-frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long-wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long-wavelength structure from the artificial low-frequency components introduced by exponential damping.

  17. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall quality of ultrasound images, yielding much better SNR and image resolution; the time consumption of these methods shows no significant increase on a GPU platform.

  18. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise are widespread when spectra are captured with an infrared spectrometer, especially as the aging of instruments has become a serious problem. In this paper, a blind spectral deconvolution method is proposed that introduces the similarity of multiple scales. Since latent spectra at different scales are similar, this similarity is used as prior knowledge to constrain the estimated latent spectrum to be similar to its previous-scale estimate, reducing the artifacts produced by deconvolution. The experimental results indicate that the proposed method obtains better performance than state-of-the-art methods, with satisfying deconvolution results and fewer artifacts. The recovered infrared spectra make it easy to extract spectral features and recognize unknown objects. (paper)

  19. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data where no underlying physical model is available, no quantitative criteria are known for finding an optimal noise filter that would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of the graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method for obtaining fairly good deconvolved kinetic curves. A few examples of the deconvolution of real-life experimental curves with graphical noise filter optimisation are also shown.
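
    A minimal sketch of the deconvolution-by-inverse-filtering setup whose noise filter the record optimises, assuming a Gaussian low-pass noise filter with cut-off fc; all names are illustrative placeholders.

```python
import numpy as np

def fourier_deconvolve(signal, irf, fc):
    """Inverse filtering of `signal` by the instrument response `irf`,
    damped by a Gaussian noise filter with cut-off `fc` (cycles/sample, e.g. 0.1)."""
    n = len(signal)
    F_sig = np.fft.rfft(signal)
    F_irf = np.fft.rfft(irf, n)
    freqs = np.fft.rfftfreq(n)                    # frequency axis in cycles/sample
    noise_filter = np.exp(-(freqs / fc) ** 2)     # Gaussian low-pass filter
    eps = 1e-12 * np.max(np.abs(F_irf))           # guard against division by ~0
    F_dec = F_sig * noise_filter / np.where(np.abs(F_irf) > eps, F_irf, eps)
    return np.fft.irfft(F_dec, n)
```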

  20. Optimization of Soluble Expression and Purification of Recombinant Human Rhinovirus Type-14 3C Protease Using Statistically Designed Experiments: Isolation and Characterization of the Enzyme.

    Science.gov (United States)

    Antoniou, Georgia; Papakyriacou, Irineos; Papaneophytou, Christos

    2017-10-01

    Human rhinovirus (HRV) 3C protease is widely used in recombinant protein production for various applications such as biochemical characterization and structural biology projects to separate recombinant fusion proteins from their affinity tags in order to prevent interference between these tags and the target proteins. Herein, we report the optimization of expression and purification conditions of glutathione S-transferase (GST)-tagged HRV 3C protease by statistically designed experiments. Soluble expression of GST-HRV 3C protease was initially optimized by response surface methodology (RSM), and a 5.5-fold increase in enzyme yield was achieved. Subsequently, we developed a new incomplete factorial (IF) design that examines four variables (bacterial strain, expression temperature, induction time, and inducer concentration) in a single experiment. The new design called Incomplete Factorial-Strain/Temperature/Time/Inducer (IF-STTI) was validated using three GST-tagged proteins. In all cases, IF-STTI resulted in only 10% lower expression yields than those obtained by RSM. Purification of GST-HRV 3C was optimized by an IF design that examines simultaneously the effect of the amount of resin, incubation time of cell lysate with resin, and glycerol and DTT concentration in buffers, and a further 15% increase in protease recovery was achieved. Purified GST-HRV 3C protease was active at both 4 and 25 °C in a variety of buffers.

  1. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
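
    To make the discrepancy principle concrete, here is a simplified stand-in that uses a Tikhonov (l2) deconvolution instead of the paper's TV/ADMM solver and bisects the regularization parameter until the residual matches the noise level; names and ranges are illustrative.

```python
import numpy as np

def tikhonov_deconv(g, h, lam):
    """Circular-convolution Tikhonov deconvolution of data g by kernel h."""
    n = len(g)
    H = np.fft.rfft(h, n)
    G = np.fft.rfft(g)
    return np.fft.irfft(np.conj(H) * G / (np.abs(H) ** 2 + lam), n)

def morozov_lambda(g, h, noise_norm, lam_lo=1e-10, lam_hi=1e4, n_steps=60):
    """Bisect log(lambda): the residual ||h*f - g|| grows monotonically with lambda,
    so we stop where it matches the known noise norm (Morozov's principle)."""
    n = len(g)
    H = np.fft.rfft(h, n)
    for _ in range(n_steps):
        lam = np.sqrt(lam_lo * lam_hi)
        f = tikhonov_deconv(g, h, lam)
        resid = np.linalg.norm(np.fft.irfft(H * np.fft.rfft(f), n) - g)
        if resid < noise_norm:
            lam_lo = lam          # residual below the noise level: under-regularized
        else:
            lam_hi = lam
    return np.sqrt(lam_lo * lam_hi)
```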

  2. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To address combined gearbox failure extraction in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution is proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract the modulation features embedded in the observed signals; the CFPA algorithm is then employed for complex-domain blind separation; finally, the J-divergence of the spectra is employed as a distance measure to resolve the permutation. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gearbox combined failure detection in practice.

  3. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed-order kinetics and for a continuous-trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (Im) and the maximum peak temperature (Tm), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) for the case of the continuous-trap distribution, or a constant α for the case of mixed-order kinetics.

  4. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    This paper discusses gamma-ray spectrum interpretation technology for nuclear logging. The principles of the familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are presented. Building on the quantitative interpretation of intensity gamma logging by the deconvolution method, a new quantitative interpretation method that separates the radioactive elements is presented for gamma-spectrum logging. This point-by-point spectrum-striping deconvolution technology gives the logging data a quantitative interpretation. (authors)

  5. Expression

    Directory of Open Access Journals (Sweden)

    Wang-Xia Wang

    2014-02-01

    The miR-15/107 family comprises a group of 10 paralogous microRNAs (miRNAs) sharing a 5′ AGCAGC sequence. These miRNAs have overlapping targets. In order to characterize the expression of miR-15/107 family miRNAs, we employed a customized TaqMan low-density micro-fluidic PCR array to investigate the expression of miR-15/107 family members, and other selected miRNAs, in 11 human tissues obtained at autopsy, including the cerebral cortex, frontal cortex, primary visual cortex, thalamus, heart, lung, liver, kidney, spleen, stomach and skeletal muscle. miR-103, miR-195 and miR-497 were expressed at similar levels across the various tissues, whereas miR-107 is enriched in brain samples. We also examined the expression patterns of evolutionarily conserved miR-15/107 miRNAs in three distinct primary rat brain cell preparations (enriched for cortical neurons, astrocytes and microglia, respectively). In primary cultures of rat brain cells, several members of the miR-15/107 family are enriched in neurons compared to other cell types in the central nervous system (CNS). In addition to mature miRNAs, we also examined the expression of precursors (pri-miRNAs). Our data suggest a generally poor correlation between the expression of mature miRNAs and their precursors. In summary, we provide a detailed study of the tissue- and cell-type-specific expression profile of this highly expressed and phylogenetically conserved family of miRNA genes.

  6. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

    The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in processing experimental spectra of the Doppler broadening of the annihilation line in positron annihilation measurements is described. By comparing results from both algorithms, it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if one uses the 1274 keV 22Na peak, which was measured simultaneously with the annihilation peak, for the deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes of the annihilation peak (e.g. the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
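
    The two iterations compared in the record, sketched in NumPy; A is the instrumental response matrix and y the measured (non-negative) spectrum, both placeholders rather than the authors' implementation.

```python
import numpy as np

def van_cittert(A, y, n_iter=200, mu=1.0):
    """Van Cittert: additive correction toward A @ x = y; can go negative."""
    x = y.copy()
    for _ in range(n_iter):
        x = x + mu * (y - A @ x)
    return x

def gold(A, y, n_iter=200, eps=1e-12):
    """Gold: multiplicative (ratio) correction; preserves positivity,
    assuming y >= 0 and a non-negative response matrix A."""
    x = np.full_like(y, y.mean())
    for _ in range(n_iter):
        x *= y / np.maximum(A @ x, eps)
    return x
```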

  7. Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Directory of Open Access Journals (Sweden)

    Turk Rolf

    2006-04-01

    Background: The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results: We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan- and gamma-sarcoglycan-deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion: The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple and can be used on gene expression data generated from any experimental design and for any number of conditions.
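
    A simplified two-condition sketch of the idea (polynomial fits per condition, then a Hotelling-type statistic on the coefficient difference); it assumes equal, independent noise across time points, whereas the published test is more general. All names are illustrative.

```python
import numpy as np
from scipy import stats

def temporal_T2(t, y1, y2, degree=2):
    """Test whether one gene's temporal profiles y1, y2 (sampled at times t)
    differ, via polynomial coefficients and a Hotelling-type T2 statistic."""
    X = np.vander(t, degree + 1)                 # polynomial design matrix
    XtX_inv = np.linalg.inv(X.T @ X)
    b1, b2 = XtX_inv @ X.T @ y1, XtX_inv @ X.T @ y2
    r1, r2 = y1 - X @ b1, y2 - X @ b2
    n, p = len(t), degree + 1
    dof = 2 * (n - p)
    s2 = (r1 @ r1 + r2 @ r2) / dof               # pooled residual variance
    cov = 2 * s2 * XtX_inv                       # covariance of b1 - b2
    d = b1 - b2
    T2 = d @ np.linalg.solve(cov, d)
    # With a scalar pooled variance, T2 / p follows an F(p, dof) null distribution.
    return T2, stats.f.sf(T2 / p, p, dof)
```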

  8. Punctuated Equilibrium in Statistical Models of Generalized Coevolutionary Resilience: How Sudden Ecosystem Transitions Can Entrain Both Phenotype Expression and Darwinian Selection

    Science.gov (United States)

    Wallace, Rodrick; Wallace, Deborah

    We argue that mesoscale ecosystem resilience shifts akin to sudden phase transitions in physical systems can entrain similarly punctuated events of gene expression on more rapid time scales, and, in part through such means, slower changes induced by selection pressure, triggering punctuated equilibrium Darwinian evolutionary transitions on geologic time scales. The approach reduces ecosystem, gene expression, and Darwinian genetic dynamics to a least common denominator of information sources interacting by crosstalk at markedly differing rates. Pettini's 'topological hypothesis', via a homology between information source uncertainty and free energy density, generates a regression-like class of statistical models of sudden coevolutionary phase transition based on the Rate Distortion and Shannon-McMillan Theorems of information theory which links all three levels. A mathematical treatment of Holling's extended keystone hypothesis regarding the particular role of mesoscale phenomena in entraining both slower and faster dynamical structures produces the result. A main theme is the necessity of a cognitive paradigm for gene expression, mirroring I. Cohen's cognitive approach to immune function. Invocation of the necessary conditions imposed by the asymptotic limit theorems of communication theory enables us to penetrate one layer more deeply before needing to impose an empirically-derived phenomenological system of 'Onsager relation' recursive coevolutionary stochastic differential equations. Extending the development to second order via a large deviations argument permits modeling the influence of human cultural structures on ecosystems as 'farming'.

  9. Deconvolution of the thermoluminescent emission curve. Second order kinetics

    International Nuclear Information System (INIS)

    Moreno y M, A.; Moreno B, A.

    1999-01-01

    This work describes the Randall-Wilkins second-order kinetics in Microsoft Excel, which allows a thermoluminescent glow curve to be expressed as a sum of Gaussians plus the corresponding correction factors. These factors are obtained from the differences between the measured thermoluminescence curve and the proposed Gaussians. The results obtained justify the Gaussian expression together with the correction factor. (Author)
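
    A hedged sketch of the same strategy outside Excel: fit the glow curve as a sum of Gaussians with SciPy and keep the residual as the correction term. The number of peaks and the starting values are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *params):
    """Sum of Gaussians; params = (I1, T1, s1, I2, T2, s2, ...)."""
    y = np.zeros_like(T)
    for Im, Tm, s in zip(params[0::3], params[1::3], params[2::3]):
        y += Im * np.exp(-0.5 * ((T - Tm) / s) ** 2)
    return y

def fit_glow_curve(T, I, p0):
    """Fit intensity I(T); return fitted parameters and the residual
    ('correction factor') that the Gaussian sum misses."""
    popt, _ = curve_fit(gaussians, T, I, p0=p0)
    return popt, I - gaussians(T, *popt)

# Usage with two hypothetical peaks: fit_glow_curve(T, I, p0=[1.0, 420, 15, 0.5, 470, 20])
```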

  10. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...

  11. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    … and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  12. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and shown in various studies the potential to suppress artifacts due to…

  13. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as…

  14. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as…
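
    A rough sketch of the ForWaRD recipe (a Fourier-regularized inverse followed by wavelet-domain shrinkage), using PyWavelets; the regularization weight, threshold and wavelet are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt

def forward_hrf(bold, stimulus, alpha=0.05, thr=0.1, wavelet="db4"):
    """Estimate an HRF from a BOLD time series and its stimulus time course."""
    n = len(bold)
    S = np.fft.rfft(stimulus, n)
    B = np.fft.rfft(bold)
    # Step 1: Fourier-regularized deconvolution (Tikhonov-type inverse).
    H = np.conj(S) * B / (np.abs(S) ** 2 + alpha * np.mean(np.abs(S) ** 2))
    h = np.fft.irfft(H, n)
    # Step 2: wavelet-domain soft thresholding of the noisy Fourier estimate.
    coeffs = pywt.wavedec(h, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr * np.abs(c).max(), mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:n]
```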

  15. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by applying the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  16. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., the point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  17. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and pay special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution.

  18. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
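
    A minimal sketch of the core operation, spectral division of the surface record by the borehole record with water-level stabilization; the variable names and the water-level fraction are illustrative assumptions.

```python
import numpy as np

def deconvolve_pair(u_surface, u_borehole, water_level=0.01):
    """Deconvolution interferometry for a surface-borehole sensor pair:
    returns an estimate of the impulse response from borehole to surface."""
    n = len(u_surface)
    U_s = np.fft.rfft(u_surface)
    U_b = np.fft.rfft(u_borehole, n)
    power = np.abs(U_b) ** 2
    floor = water_level * power.max()          # water-level stabilization
    D = U_s * np.conj(U_b) / np.maximum(power, floor)
    return np.fft.irfft(D, n)
```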

  19. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.

  20. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  1. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  2. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described and some examples are presented to help users run the programs and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)

  3. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)

  4. EXPRESS

    International Nuclear Information System (INIS)

    Ancelin, C.; Le, P.; DeSaint-Quentin, S.; Villatte, N.

    1987-01-01

    This paper presents EXPRESS, an expert system developed for the automation of reliability studies. The first part describes the method for static thermohydraulic systems. In this step, the authors define the knowledge representation based on the two inference engines, ALOUETTE and LCR, developed by EDF. They explain the whole process of constructing a fault tree from a topological and functional description of the system. Numerous examples are given to illustrate the method. This is followed by the lessons learned from studies performed on some safety systems of the PALUEL nuclear plant. The development of the same approach for electric power systems is then described, with emphasis on the differences resulting from the sequential nature of these systems. Finally, the main advantages identified during the studies are presented.

  5. Deconvoluting complex tissues for expression quantitative trait locus-based analyses

    DEFF Research Database (Denmark)

    Seo, Ji-Heui; Li, Qiyuan; Fatima, Aquila

    2013-01-01

    Breast cancer genome-wide association studies have pinpointed dozens of variants associated with breast cancer pathogenesis. The majority of risk variants, however, are located outside of known protein-coding regions. Therefore, identifying which genes the risk variants are acting through present...

  6. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  7. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Aiim

    2006-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  8. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source.

  9. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters: basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent-particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and non-ideal lattice models, imperfect gas theory of liquids, the theory of solutions, statistical thermodynamics of interfaces, statistical thermodynamics of high-molecular systems, and quantum statistics.

  10. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic power needed to penetrate highly attenuating materials can only be transmitted by resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but such methods are problematic for austenitic steel. Here, a time-domain deconvolution method is developed that takes the characteristics of the wave into account: a first step of phase correction, and a second step of spectral equalization which restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method.

  11. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
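
    In the spirit of the record's step-by-step examples, a synthetic three-indicator deconvolution with a truncated-SVD pseudoinverse; all matrices are randomly generated placeholders rather than the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(100, 3)))       # pure component spectra (100 wavelengths x 3 indicators)
C_true = np.abs(rng.normal(size=(3, 20)))   # concentrations over 20 titration steps
D = A @ C_true + rng.normal(0, 0.01, (100, 20))   # measured mixture spectra + noise

# Pseudoinverse via SVD; small singular values are truncated to tame noise.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
w_inv = np.where(w > 1e-3 * w.max(), 1.0 / w, 0.0)
A_pinv = Vt.T @ np.diag(w_inv) @ U.T
C_est = A_pinv @ D                           # deconvoluted concentrations
print(np.allclose(C_est, C_true, atol=0.1))  # True for this noise level
```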

  12. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.

  13. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  14. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, as well as the derivation of the TL glow-curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor, s, that is temperature independent and for s as a function of temperature.

  15. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of the potential earthquake response of a soil deposit, a soil structure or a soil-foundation-structure system requires knowledge of earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. A process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence the output of this process. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  16. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  17. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    …salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider…

  18. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which severely limits the resolution of the received echograms. This resolution may be improved with deconvolution methods. But one-dimensional deconvolution methods come up against problems in nondestructive testing when the investigated medium is highly anisotropic and inhomogeneous (i.e. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature. But they often come from other application fields (biomedical engineering, geophysics) and we show they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method solves the deconvolution problem as an estimation one and is performed in two steps: (i) a phase correction step which takes the phase of the wavelet into account and estimates a phase-corrected echogram; the phase of the wavelet is only due to the transducer and is assumed time-invariant during the propagation; (ii) a band equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and actual results are given to prove that this is a good approach for resolution improvement in attenuating media. [fr]

  19. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capable…

  20. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples.

  1. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  2. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  3. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm; the curve fitted to the experimental data is the convolution of a Gaussian with two left-handed exponentials describing the low-energy tail. The code can also fit a possible constant background contribution. The use of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of 239Pu, 241Am and 235U. (author)
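
    As a rough illustration of this kind of line-shape fitting (not the ALFITeX code itself), the sketch below fits a single alpha peak modeled as a Gaussian convolved with one left-handed exponential tail plus a constant background, using SciPy's Levenberg-Marquardt fitter; all channel numbers and peak parameters are invented for the example.

        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import curve_fit

        def alpha_peak(x, area, mu, sigma, tau):
            # Gaussian (mu, sigma) convolved with a left-handed exponential tail
            # of decay constant tau; closed form via the complementary error function
            arg = ((x - mu) / sigma + sigma / tau) / np.sqrt(2.0)
            return (area / (2.0 * tau)) * np.exp((x - mu) / tau + sigma**2 / (2.0 * tau**2)) * erfc(arg)

        def spectrum(x, bkg, area, mu, sigma, tau):
            return bkg + alpha_peak(x, area, mu, sigma, tau)

        chan = np.arange(0, 256, dtype=float)
        truth = (2.0, 5.0e4, 180.0, 2.5, 6.0)          # bkg, area, mu, sigma, tau (invented)
        counts = np.random.poisson(spectrum(chan, *truth)).astype(float)

        p0 = (1.0, 4.0e4, 175.0, 3.0, 5.0)             # crude initial guesses
        popt, pcov = curve_fit(spectrum, chan, counts, p0=p0, method='lm')
        print("fitted mu=%.2f sigma=%.2f tau=%.2f" % (popt[2], popt[3], popt[4]))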

  4. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  5. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    A generally known problem of the GC separation of the trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with a cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 °C. The resolution of these CLA isomers obtained under the given conditions was not high enough for direct quantitative analysis, but it was sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by separating a model CLA mixture prepared by mixing a commercial CLA mixture with a CLA isomer fraction obtained by semi-preparative HPLC separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two small isomers trans-7,cis-9 and trans-8,cis-10 (at ratios up to 1:100).

  6. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
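
    The scanner-specific implementation is not reproduced here, but the core MLEM (Richardson-Lucy) update it builds on is compact. In the sketch below, the blur kernel is the average of delta functions at the known motion offsets, and the estimate is refined multiplicatively; the offsets and image are toy values.

        import numpy as np
        from scipy.signal import fftconvolve

        def mlem_deblur(blurred, psf, n_iter=50, eps=1e-12):
            # multiplicative MLEM / Richardson-Lucy update with a known blur kernel
            est = np.full_like(blurred, blurred.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                reblurred = fftconvolve(est, psf, mode='same')
                ratio = blurred / np.maximum(reblurred, eps)
                est *= fftconvolve(ratio, psf_mirror, mode='same')
            return est

        # blur kernel: average of delta functions at the tracked motion offsets
        offsets = [(0, 0), (1, 0), (2, 1), (0, -1)]    # hypothetical motion track
        psf = np.zeros((7, 7))
        for dy, dx in offsets:
            psf[3 + dy, 3 + dx] += 1.0
        psf /= psf.sum()

        image = np.zeros((64, 64))
        image[28:36, 30:34] = 1.0                      # toy activity distribution
        blurred = fftconvolve(image, psf, mode='same')
        restored = mlem_deblur(blurred, psf)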

  7. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  8. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies, measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to these 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed both with a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
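
    In matrix form, isotope pattern deconvolution reduces to a small least-squares problem: the measured pattern is modeled as a mixture of the natural pattern and the tracer pattern. A minimal sketch follows; the natural abundances are the standard values for iron, while the 57Fe tracer composition is a made-up example.

        import numpy as np

        # columns: natural Fe pattern and a hypothetical 57Fe-enriched tracer pattern
        # rows: isotopes 54Fe, 56Fe, 57Fe, 58Fe
        A = np.array([[0.05845, 0.0010],
                      [0.91754, 0.0200],
                      [0.02119, 0.9600],
                      [0.00282, 0.0190]])

        # synthetic measurement: 85% natural iron + 15% tracer, plus a little noise
        measured = A @ np.array([0.85, 0.15]) + 1e-4 * np.random.randn(4)

        x, *_ = np.linalg.lstsq(A, measured, rcond=None)
        print("tracer/tracee ratio = %.4f" % (x[1] / x[0]))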

  9. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies, measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to these 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed both with a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)

  10. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution to the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.

  11. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the Hierarchical Data Format 5 (HDF5) for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.

  12. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  13. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  14. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
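
    The BIC bookkeeping behind such model selection is easy to reproduce. Below is a generic sketch (not the decon1d code): sums of k Lorentzians are fitted for increasing k, and the model minimizing BIC = n ln(RSS/n) + p ln(n) is kept. The naive initial guesses are adequate only for well-behaved synthetic data.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzians(x, *p):
            # p holds triples (amplitude, center, half-width) for each peak
            y = np.zeros_like(x)
            for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
                y += a * w**2 / ((x - c)**2 + w**2)
            return y

        def bic(y, yfit, n_params):
            n = y.size
            return n * np.log(np.sum((y - yfit)**2) / n) + n_params * np.log(n)

        x = np.linspace(-10, 10, 400)
        y = lorentzians(x, 1.0, -2.0, 1.0, 0.6, 3.0, 1.5) + 0.02 * np.random.randn(x.size)

        best = None
        for k in range(1, 5):
            centers = np.linspace(x.min(), x.max(), k + 2)[1:-1]
            p0 = [v for c in centers for v in (y.max(), c, 1.0)]
            try:
                popt, _ = curve_fit(lorentzians, x, y, p0=p0, maxfev=20000)
            except RuntimeError:
                continue                    # this model failed to converge
            score = bic(y, lorentzians(x, *popt), len(p0))
            if best is None or score < best[0]:
                best = (score, k)
        print("BIC selects %d peak(s)" % best[1])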

  15. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
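
    PDIPM itself is involved, but the underlying l1 deconvolution model can be prototyped with a plain proximal-gradient (ISTA) loop, sketched below on a toy Toeplitz convolution system; the impulse response and regularization weight are invented.

        import numpy as np
        from scipy.linalg import toeplitz

        def ista_deconv(H, y, lam, n_iter=1000):
            # minimize 0.5*||Hx - y||^2 + lam*||x||_1 by proximal gradient descent
            L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(H.shape[1])
            for _ in range(n_iter):
                z = x - H.T @ (H @ x - y) / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return x

        n = 200
        t = np.arange(n)
        h = np.exp(-t / 5.0) * np.sin(t / 2.0)     # toy structural impulse response
        H = toeplitz(h, np.zeros(n))               # lower-triangular convolution matrix

        force = np.zeros(n)
        force[[40, 140]] = [3.0, 1.5]              # sparse impact history
        y = H @ force + 0.01 * np.random.randn(n)
        recovered = ista_deconv(H, y, lam=0.1)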

  16. Enriching the gene set analysis of genome-wide data by incorporating directionality of gene expression and combining statistical hypotheses and methods

    Science.gov (United States)

    Väremo, Leif; Nielsen, Jens; Nookaew, Intawat

    2013-01-01

    Gene set analysis (GSA) is used to elucidate genome-wide data, in particular transcriptome data. A multitude of methods have been proposed for this step of the analysis, and many of them have been compared and evaluated. Unfortunately, there is no consolidated opinion regarding which methods should be preferred, and the variety of available GSA software and implementations poses a difficulty for the end-user who wants to try out different methods. To address this, we have developed the R package Piano, which collects a range of GSA methods into the same system, for the benefit of the end-user. We further refine the GSA workflow by using modifications of the gene-level statistics. This enables us to divide the resulting gene set P-values into three classes, describing different aspects of gene expression directionality at the gene set level. We use our fully implemented workflow to investigate the impact of the individual components of GSA using microarray and RNA-seq data. The results show that the evaluated methods are globally similar and the major separation correlates well with our defined directionality classes. As a consequence of this, we suggest using a consensus scoring approach, based on multiple GSA runs. In combination with the directionality classes, this constitutes a more thorough basis for an enriched biological interpretation. PMID:23444143

  18. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    Science.gov (United States)

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC×GC-TOFMS data from cells grown under fermenting (R) and respiring (DR) conditions is reported. In this study, recently developed chemometric software for use with three-dimensional instrumentation data was implemented, using a statistically based Fisher ratio method. The Fisher ratio method is fully automated and rapidly reduces the data to pinpoint two-dimensional chromatographic peaks differentiating sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold more than the number of previously reported metabolite peaks identified (26). In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 changed significantly at the 95% confidence level between the DR and R conditions according to Student's t-test. PARAFAC determined the concentration ratio and provided a fully deconvoluted (i.e., mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC×GC-TOFMS data due to the use of the entire data set in the analysis (640 MB × 70 runs, double-precision floating point).
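
    The Fisher ratio itself is a one-line computation per variable: between-class variance divided by pooled within-class variance. A minimal two-class sketch, with invented array shapes:

        import numpy as np

        def fisher_ratios(a, b):
            # a, b: (samples, variables) arrays for the two growth conditions
            na, nb = len(a), len(b)
            ma, mb = a.mean(axis=0), b.mean(axis=0)
            grand = np.vstack([a, b]).mean(axis=0)
            between = na * (ma - grand)**2 + nb * (mb - grand)**2   # 1 dof for 2 classes
            within = (((a - ma)**2).sum(axis=0) + ((b - mb)**2).sum(axis=0)) / (na + nb - 2)
            return between / within

        # e.g. 5 replicates per condition, 1000 candidate peak variables
        R = np.random.randn(5, 1000)
        DR = np.random.randn(5, 1000)
        DR[:, :10] += 2.0                  # ten variables genuinely differ between classes
        top = np.argsort(fisher_ratios(R, DR))[::-1][:10]   # should recover mostly those ten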

  19. Analysis of gene expression profiles of soft tissue sarcoma using a combination of knowledge-based filtering with integration of multiple statistics.

    Directory of Open Access Journals (Sweden)

    Anna Takahashi

    The diagnosis and treatment of soft tissue sarcomas (STS) have been difficult. Of the diverse histological subtypes, undifferentiated pleomorphic sarcoma (UPS) is particularly difficult to diagnose accurately, and its classification per se is still controversial. Recent advances in genomic technologies provide an excellent way to address such problems. However, it is often difficult, if not impossible, to identify definitive disease-associated genes using genome-wide analysis alone, primarily because of multiple testing problems. In the present study, we analyzed microarray data from 88 STS patients using a combination method that used knowledge-based filtering and a simulation based on the integration of multiple statistics to reduce multiple testing problems. We identified 25 genes, including hypoxia-related genes (e.g., MIF, SCD1, P4HA1, ENO1, and STAT1) and cell cycle- and DNA repair-related genes (e.g., TACC3, PRDX1, PRKDC, and H2AFY). These genes showed significant differential expression among histological subtypes, including UPS, and showed associations with overall survival. STAT1 showed a strong association with overall survival in UPS patients (log-rank p = 1.84 × 10⁻⁶ and adjusted p value 2.99 × 10⁻³ after the permutation test). According to the literature, the 25 genes selected are useful not only as markers of differential diagnosis but also as prognostic/predictive markers and/or therapeutic targets for STS. Our combination method can identify genes that are potential prognostic/predictive factors and/or therapeutic targets in STS and possibly in other cancers. These disease-associated genes deserve further preclinical and clinical validation.

  20. A unified approach to deconvolution radiation spectra measured by radiochromic films

    CERN Document Server

    Stancic, V; Ljubenov, V

    2002-01-01

    A method is proposed for evaluating the energy distribution of a radiation source on the basis of the measured spatial distribution of deposited energy. The measured data were obtained using radiochromic films. The mathematical model is an inversion problem for a Fredholm integral equation. Negative solutions, which arise from the statistical uncertainty inherent in the measurements, were handled through an additional condition expressed via free energy-group boundaries. Examples are given for an electron source and a neutron radiation field.
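
    Once discretized, the Fredholm problem becomes y = K phi with phi constrained to be nonnegative, which can be solved directly with nonnegative least squares. The depth-dose kernel below is a crude exponential-attenuation stand-in, purely for illustration:

        import numpy as np
        from scipy.optimize import nnls

        depth = np.linspace(0.1, 5.0, 60)      # film depth coordinate (arbitrary units)
        E = np.linspace(0.5, 3.0, 8)           # energy-group centers (arbitrary units)
        K = np.exp(-np.outer(depth, 1.0 / E))  # toy kernel: penetration grows with energy

        phi_true = np.array([0.0, 0.1, 0.5, 1.0, 0.4, 0.1, 0.0, 0.0])   # source spectrum
        y = K @ phi_true + 0.005 * np.random.randn(depth.size)          # "measured" dose

        phi_est, resid = nnls(K, y)            # nonnegativity enforced by construction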

  1. Statistical symmetries in physics

    International Nuclear Information System (INIS)

    Green, H.S.; Adelaide Univ., SA

    1994-01-01

    Every law of physics is invariant under some group of transformations and is therefore the expression of some type of symmetry. Symmetries are classified as geometrical, dynamical or statistical. At the most fundamental level, statistical symmetries are expressed in the field theories of the elementary particles. This paper traces some of the developments from the discovery of Bose statistics, one of the two fundamental symmetries of physics. A series of generalizations of Bose statistics is described. A supersymmetric generalization accommodates fermions as well as bosons, and further generalizations, including parastatistics, modular statistics and graded statistics, accommodate particles with properties such as 'colour'. A factorization of elements of ggl(n_b, n_f) can be used to define truncated boson operators. A general construction is given for q-deformed boson operators, and explicit constructions of the same type are given for various 'deformed' algebras. A summary is given of some of the applications and potential applications. 39 refs., 2 figs

  2. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 × 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using nonlinear least squares (NLS) implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and rejecting false echoes when generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing waveform lidar data to derive information on terrain elevation and canopy characteristics.
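
    The Gold algorithm at the heart of this study is a short multiplicative iteration that keeps the estimate nonnegative. A one-dimensional sketch on a toy waveform follows; the Gaussian system response and echo positions are invented:

        import numpy as np
        from scipy.linalg import toeplitz

        def gold_deconvolve(H, y, n_iter=5000, eps=1e-12):
            # Gold ratio iteration: x <- x * (H^T y) / (H^T H x); x stays nonnegative
            HtH = H.T @ H
            Hty = np.maximum(H.T @ y, 0.0)     # clip: noise can push entries negative
            x = np.full(H.shape[1], max(y.mean(), eps))
            for _ in range(n_iter):
                x *= Hty / np.maximum(HtH @ x, eps)
            return x

        n = 300
        t = np.arange(n, dtype=float)
        g = np.exp(-0.5 * ((t - 10.0) / 3.0) ** 2)     # toy transmitted pulse shape
        H = toeplitz(g, np.zeros(n))                   # causal convolution matrix

        echoes = np.zeros(n)
        echoes[[120, 135]] = [1.0, 0.7]                # two overlapping returns
        y = H @ echoes + 0.002 * np.random.randn(n)
        x = gold_deconvolve(H, y)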

  3. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  4. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes, but not in foam cells, was fluorescent sterol confined to the droplet-limiting membrane. We developed an approach to visualize... macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon...

  5. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min, a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and the nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during annealing

  6. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    Anelastic attenuation during seismic wave propagation is the trigger of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss as depth increases. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect that commonly occurs in deeper-level seismic data. The attenuation effect greatly influences the seismic images of deeper target levels, creating pitfalls in several respects. Seismic amplitude at a deeper target level often cannot represent the real subsurface character due to low amplitude values or chaotic events nearing the basement. In terms of frequency, the decay can be seen as diminishing frequency content in deeper targets. Meanwhile, seismic amplitude is the simple tool to point out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before a further advanced interpretation method is applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the northeast area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs at deeper levels of seismic data. We evaluate this pitfall by applying Gabor deconvolution to address the attenuation problem. Gabor deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor deconvolution estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic shows better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic is used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the

  7. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the Non-Negative Least Squares (NNLS) regularized method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm and with significantly less computer processing time

  8. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...

  9. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions.
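
    A bare-bones version of such Fourier deconvolution divides the spectrum's transform by that of the broadening lineshape and apodizes before transforming back. In the sketch below, the Lorentzian width, noise level and cutoff are illustrative choices, not values from the chapter:

        import numpy as np

        def fourier_deconvolve(spec, lineshape, keep_frac=0.10):
            # divide out a known broadening lineshape; a hard low-pass cutoff
            # (apodization) keeps the amplified high-frequency noise bounded
            n = spec.size
            S = np.fft.rfft(spec)
            B = np.fft.rfft(np.fft.ifftshift(lineshape))   # lineshape centered mid-array
            out = S / np.where(np.abs(B) > 1e-12, B, 1e-12)
            out[int(keep_frac * out.size):] = 0.0
            return np.fft.irfft(out, n)

        x = np.linspace(-50, 50, 1024)
        broad = 2.0 / (np.pi * (x**2 + 4.0))               # Lorentzian, half-width 2
        broad /= broad.sum()
        doublet = np.exp(-(x - 3)**2) + np.exp(-(x + 3)**2)   # two overlapped lines
        B = np.fft.rfft(np.fft.ifftshift(broad))
        observed = np.fft.irfft(np.fft.rfft(doublet) * B, x.size) \
            + 1e-5 * np.random.randn(x.size)
        sharpened = fourier_deconvolve(observed, broad)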

  10. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 512 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)

  12. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  14. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in vitro-in vivo correlation tools for describing the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response represents a body response in vivo. While functional treatment, e.g., in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm of its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
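
    On a uniform time grid, convolution is a lower-triangular linear system, so deconvolution is plain forward substitution, which is exactly the kind of column-by-column arithmetic a spreadsheet can carry out. A sketch, assuming the weighting function is nonzero at t = 0:

        import numpy as np

        def convolve(u, w, dt):
            # r[i] = dt * sum_k u[k] * w[i-k] on a uniform time grid
            return dt * np.array([np.dot(u[:i + 1], w[i::-1]) for i in range(len(u))])

        def deconvolve(r, w, dt):
            # invert the lower-triangular convolution system by forward substitution
            u = np.zeros(len(r))
            for i in range(len(r)):
                tail = np.dot(u[:i], w[i:0:-1]) if i else 0.0
                u[i] = (r[i] / dt - tail) / w[0]
            return u

        dt = 0.25
        t = np.arange(0.0, 12.0, dt)
        w = np.exp(-0.8 * t)                       # toy unit-impulse (weighting) function
        u = np.exp(-0.3 * t) - np.exp(-1.5 * t)    # toy in vitro input rate
        r = convolve(u, w, dt)                     # predicted in vivo response
        u_back = deconvolve(r, w, dt)              # recovers u up to rounding error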

  15. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

    Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  16. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gauge a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impacts the accuracy of linear deconvolution retrieval of feldspar proportions (e.g. K-feldspar vs. plagioclase) especially, as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.

  17. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-02-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gauge a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impacts the accuracy of linear deconvolution retrieval of feldspar proportions (e.g. K-feldspar vs. plagioclase) especially, as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.

  18. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.

  19. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

    A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of a linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of errors of ranging due to peak overlap or peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra, the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated

  20. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    The thermoluminescence dosimeter (TLD), especially LiF:Mg,Ti material, is one of the most practical personal dosimeters known to date. Dose measurement below 100 µGy using a TLD reader is very difficult at high precision. Software analysis can be used to improve the precision of the TLD reader. The objective of this research is to compare three TL glow curve analysis methods for doses in the range of 5 to 250 µGy. The first method is manual analysis, where dose information is obtained from the area under the glow curve between preselected temperature limits and the background signal is estimated by a second readout following the first. The second method is deconvolution, separating the glow curve into four peaks mathematically; dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six-fold over manual analysis at a dose of 20 µGy, and reduces the MMD to 10 µGy, compared with 60 µGy for manual analysis and 20 µGy for the peak-5 area method. Regarding linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose-response curve over the entire dose range

  1. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and can cause catastrophic failures. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structure of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in the threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the root of the thread to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large regularly spaced pulses from the thread. The delay time equals the propagation delay of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  2. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring the thickness of a layer, such as oxide on corroded steel, is described. A time-domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that their method depends on the ability of a skilled operator to line up an arbitrary common feature of the received signals, whereas with deconvolution no operator manipulations are necessary, so less highly trained personnel may successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions, but in the case of magnetite its high velocity of sound meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)
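
    The response-function extraction can be prototyped as frequency-domain division of the layered echo by the reference echo, stabilized by a water level so that weak spectral components of the reference do not blow up the quotient; all waveform parameters below are invented:

        import numpy as np

        def spectral_division(reflected, reference, water=0.01):
            # layer response h with reflected ~ reference (*) h; the water level
            # stabilizes the division where the reference spectrum is weak
            n = len(reflected)
            Y, X = np.fft.rfft(reflected), np.fft.rfft(reference)
            P = np.abs(X) ** 2
            return np.fft.irfft(Y * np.conj(X) / np.maximum(P, water * P.max()), n)

        fs = 100e6                                  # 100 MHz sampling rate (invented)
        t = np.arange(512) / fs
        pulse = np.exp(-((t - 1e-6) / 0.1e-6) ** 2) * np.sin(2 * np.pi * 10e6 * t)
        echo = pulse + 0.4 * np.roll(pulse, 30)     # second interface 30 samples later
        h = spectral_division(echo, pulse)
        delay = np.argmax(np.abs(h[5:])) + 5        # skip the direct-arrival spike at 0
        print("layer two-way delay: %d samples (%.0f ns)" % (delay, delay / fs * 1e9))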

  3. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using the peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq kg⁻¹ from Ankatso, soil sample spectra with average activities of about (125±2) Bq kg⁻¹ from Antsirabe, and soil sample spectra with high activities of about (21100±120) Bq kg⁻¹ from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries makes it possible to deconvolute many multiplet regions: a quartet within 235-242 keV; Pb-214 and Pb-212 within 294-301 keV; Th-232 daughters within 582-584 keV; Ac-228 within 904-911 keV and 964-970 keV; and Bi-214 within 1401-1408 keV. These peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.

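    The abstract does not document IPF's internals, but multiplet deconvolution of this kind reduces to least-squares fitting of overlapping peak shapes. A minimal Python/SciPy sketch fitting a doublet of Gaussians with a shared width on a linear background; all peak parameters below are synthetic stand-ins:

        import numpy as np
        from scipy.optimize import curve_fit

        def doublet(E, a1, mu1, a2, mu2, sigma, b0, b1):
            # Two Gaussian peaks sharing one width, on a linear background.
            g = lambda a, mu: a * np.exp(-0.5 * ((E - mu) / sigma) ** 2)
            return g(a1, mu1) + g(a2, mu2) + b0 + b1 * E

        # Synthetic counts on an energy grid around a doublet region.
        E = np.linspace(292.0, 303.0, 120)
        rng = np.random.default_rng(0)
        counts = doublet(E, 500, 295.2, 300, 300.1, 0.8, 40, 0.1) \
                 + rng.normal(0, 8, E.size)

        p0 = [400, 295, 250, 300, 1.0, 30, 0]      # initial guesses
        popt, pcov = curve_fit(doublet, E, counts, p0=p0)
        # Peak areas follow from height x width of each fitted Gaussian.
        area1 = popt[0] * popt[4] * np.sqrt(2 * np.pi)
        area2 = popt[2] * popt[4] * np.sqrt(2 * np.pi)
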
  4. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Since the information in the gradient domain is better suited to estimating the blur kernel, the blur kernel is estimated in the gradient domain; this step can be implemented quickly in the frequency domain via the Fast Fourier Transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring the accuracy of the results.

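    A conceptual Python/NumPy fragment of the two ingredients named above, the L1/L2 sparsity measure on image gradients and the soft-thresholding operator used by iterative shrinkage-thresholding; this is a sketch under stated assumptions, not the paper's full multi-scale solver:

        import numpy as np

        def l1_over_l2(img):
            # Ratio of L1 to L2 norm of the image gradients; smaller values
            # indicate sparser (sharper) gradients, which the prior favours.
            gx = np.diff(img, axis=1)
            gy = np.diff(img, axis=0)
            g = np.concatenate([gx.ravel(), gy.ravel()])
            return np.abs(g).sum() / (np.linalg.norm(g) + 1e-12)

        def soft_threshold(v, t):
            # Shrinkage operator applied at each iterative
            # shrinkage-thresholding (ISTA) step.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
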
  5. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original, with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  6. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorber is a single N-shape wave, and PA signals of complicated biological tissue can be considered combinations of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, comprising deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectral consistency. With the proposed method, the resulting PA images yield more detailed structural information; micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.

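    The deconvolution with a pre-measured PSF might look like the following Python/NumPy sketch; the Wiener-type regularisation and its snr constant are assumptions, since the abstract does not specify how the division is stabilised:

        import numpy as np

        def wiener_deconvolve(signal, psf, snr=100.0):
            # Transfer function of the measured point spread function.
            n = len(signal)
            H = np.fft.rfft(psf, n)
            # Wiener filter: behaves like 1/H where the PSF is strong and
            # rolls off where it carries little energy.
            W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.fft.irfft(np.fft.rfft(signal, n) * W, n)
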
  7. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].

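    Reconstruction of an approximately unfiltered field by repeated filtering can be sketched as a truncated van Cittert series (Python/SciPy); the Gaussian test filter and truncation order are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def approximate_deconvolution(filtered, sigma=1.0, order=3):
            # Truncated van Cittert series: u* = sum_k (I - G)^k applied to
            # the filtered field, with G a Gaussian test filter.
            G = lambda f: gaussian_filter(f, sigma)
            u = filtered.copy()
            residual = filtered.copy()
            for _ in range(order):
                residual = residual - G(residual)   # apply (I - G) once more
                u = u + residual
            return u   # approximately deconvolved (unfiltered) field
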
  8. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least-squares Levenberg-Marquardt method. The main advantage of GlowFit is its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly Microsoft Windows program; its graphic interface enables easy, intuitive manipulation of glow-peaks at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)

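    GlowFit's exact parametrisation is not given in the abstract, but a first-order glow peak is commonly fitted with the Kitis analytical approximation by nonlinear least squares, as in this Python/SciPy sketch with synthetic peak values:

        import numpy as np
        from scipy.optimize import least_squares

        K_B = 8.617e-5   # Boltzmann constant in eV/K

        def glow_peak(T, Im, E, Tm):
            # First-order glow peak, Kitis analytical approximation:
            # Im = peak height, E = activation energy (eV), Tm = peak T (K).
            d = 2 * K_B * T / E
            dm = 2 * K_B * Tm / E
            x = E / (K_B * T) * (T - Tm) / Tm
            return Im * np.exp(1 + x - (T / Tm) ** 2 * np.exp(x) * (1 - d) - dm)

        def residuals(p, T, y):
            return glow_peak(T, *p) - y

        T = np.linspace(350, 550, 400)
        y = glow_peak(T, 1e4, 1.3, 460)          # synthetic single peak
        fit = least_squares(residuals, x0=[8e3, 1.0, 450], args=(T, y),
                            bounds=([0, 0.5, 350], [1e6, 3.0, 550]))
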
  9. Deconvolution analysis of ⁹⁹ᵐTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of ⁹⁹ᵐTc-methylene diphosphonate (MDP) and ⁴⁷Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of ⁹⁹ᵐTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. ⁴⁷Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The ⁹⁹ᵐTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between ⁹⁹ᵐTc-MDP bone accumulation rates and the results of ⁴⁷Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). As a result, deconvolution analysis of regional ⁹⁹ᵐTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of ⁹⁹ᵐTc-MDP kinetics and ⁴⁷Ca kinetics might suggest a preferential binding of ⁹⁹ᵐTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  10. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics, as given by Haldane, allows for a statistical interaction between distinguishable particles (multi-species statistics). The thermodynamic quantities for such statistics can be evaluated exactly, and explicit expressions for the cluster coefficients are presented. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models. The interesting questions of generalizing this correspondence to higher dimensions and to the multi-species case remain essentially open

  11. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  12. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  13. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  14. Harmonic statistics

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  15. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  16. Histoplasmosis Statistics

    Science.gov (United States)


  17. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode /SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode /SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  18. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  19. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  20. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

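    As an illustration of the spline-based route to noise-robust differentiation, this Python/SciPy sketch smooths a noisy synthetic spectrum with a smoothing spline and then differentiates the spline analytically; the smoothing factor is an assumption tied to the synthetic noise level:

        import numpy as np
        from scipy.interpolate import splrep, splev

        # Noisy synthetic spectrum.
        x = np.linspace(0, 10, 500)
        rng = np.random.default_rng(1)
        y = np.exp(-(x - 5) ** 2) + rng.normal(0, 0.02, x.size)

        # Smoothing spline: s is matched to the expected total squared noise;
        # the derivative is then taken analytically from the spline instead
        # of differencing the noisy data directly.
        tck = splrep(x, y, s=x.size * 0.02 ** 2)
        dy = splev(x, tck, der=1)
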
  1. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  2. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  3. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  4. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  5. Descriptive statistics.

    Science.gov (United States)

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.

  6. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication started with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a deconvolution method complementary to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  7. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    … of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF … filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample, or about 1 billion operations per second, for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature … interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage …

  8. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.

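    At its core, Specter's deconvolution is a linear, non-negative decomposition of each mixture spectrum over a spectral library. A conceptual Python/SciPy sketch with a random stand-in library (the real tool operates on DIA fragment spectra; all names and data here are illustrative):

        import numpy as np
        from scipy.optimize import nnls

        # Columns of L are library spectra binned on the same grid as the
        # observed mixture spectrum b; x holds the non-negative
        # contribution of each library entry.
        rng = np.random.default_rng(2)
        L = np.abs(rng.normal(size=(200, 5)))    # stand-in spectral library
        x_true = np.array([3.0, 0.0, 1.5, 0.0, 0.7])
        b = L @ x_true + 0.01 * rng.normal(size=200)

        x_hat, rnorm = nnls(L, b)                # linear deconvolution step
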
  9. The deconvolution of sputter-etching surface concentration measurements to determine impurity depth profiles

    International Nuclear Information System (INIS)

    Carter, G.; Katardjiev, I.V.; Nobes, M.J.

    1989-01-01

    The quasi-linear partial differential continuity equations that describe the evolution of the depth profiles and surface concentrations of marker atoms in kinematically equivalent systems undergoing sputtering, ion collection and atomic mixing are solved using the method of characteristics. It is shown how atomic mixing probabilities can be deduced from measurements of ion collection depth profiles with increasing ion fluence, and how this information can be used to predict surface concentration evolution. Even with this information, however, it is shown that it is not possible to deconvolute directly the surface concentration measurements to provide initial depth profiles, except when only ion collection and sputtering from the surface layer alone occur. It is demonstrated further that optimal recovery of initial concentration depth profiles could be ensured if the concentration-measuring analytical probe preferentially sampled depths near and at the maximum depth of bombardment-induced perturbations. (author)

  10. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area has been through the preliminary study phase of geology, geochemistry, and geophysics. As part of the geophysical study, gravity measurements were carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning was applied to the gravity data, the complete Bouguer anomaly was analyzed using advanced derivative methods such as the Horizontal Gradient (HG) and Euler Deconvolution (ED) to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The analysis results will be useful in building a more realistic conceptual model of the Endut geothermal area.

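    The horizontal gradient part of the analysis reduces to a simple grid operation; a minimal Python/NumPy sketch, with grid spacings dx and dy as assumed inputs:

        import numpy as np

        def horizontal_gradient(g, dx, dy):
            # Horizontal gradient magnitude of a gridded Bouguer anomaly g;
            # its maxima tend to align with steep lateral density contrasts,
            # i.e. candidate fault boundaries.
            dgdx = np.gradient(g, dx, axis=1)
            dgdy = np.gradient(g, dy, axis=0)
            return np.hypot(dgdx, dgdy)
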
  11. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  12. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages, such as three-dimensional refocusing and unambiguous object reconstruction.

  13. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  14. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  15. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses how optically fast multichannel Thomson scattering optics can be used for H-alpha emission profile measurement. A technique based on the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that, for this case, about 28 Fourier modes are optimal to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, the assumed up-down symmetry, and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges

  16. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author) [fr]

  17. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required to obtain information that relates … to crystallographic-surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides means of atomic resolution where the tip participates actively in the process of imaging. … observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted, so as to resolve more details of the crystalline morphology. Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current … Two metallic surfaces influence ions trapped …

  18. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are: modularization of the structure for implementation feasibility, reduction of the data computation and of the dependency on the 2D FFT/IFFT, and acceleration of the power operation by a segmented look-up table. On this basis, Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image-restoration system exceeds 7.8 Msps. The optimization proves efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  19. Measurement and deconvolution of detector response time for short HPM pulses: Part 1, Microwave diodes

    International Nuclear Information System (INIS)

    Bolton, P.R.

    1987-06-01

    A technique is described for measuring and deconvolving the response times of microwave diode detection systems in order to generate corrected input signals typical of an infinite detection rate. The method has been applied to cases of 2.86 GHz ultra-short HPM pulse detection where the pulse rise time is comparable to that of the detector, whereas the duration of a few nanoseconds is significantly longer. Results are specified in terms of the enhancement of equivalent deconvolved input voltages for given observed voltages. The convolution integral imposes the constraint of linear detector response to input power levels; this is physically equivalent to the conservation of integrated pulse energy in the deconvolution process. The applicable dynamic range of a microwave diode is therefore limited to a smaller-signal region as determined by its calibration

  20. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, like in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells which are most likely the cytoskeletal MreB protein and the division site regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  1. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

    The thermogravimetric kinetics of bamboo waste (BW) pyrolysis has been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, BW pyrolytic profiles could be separated well into three reactions, each of which corresponded to pseudo-hemicellulose (P-HC), pseudo-cellulose (P-CL), and pseudo-lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energies of P-HC, P-CL and P-LG were found to be 175.6 kJ/mol, 199.7 kJ/mol, and 158.4 kJ/mol, respectively. Energy compensation effects (ln k_0,z vs. E_z) of the pseudo-components were in good linearity, from which the pre-exponential factors (k_0) were determined as 6.22×10^11 s^-1 (P-HC), 4.50×10^14 s^-1 (P-CL) and 1.3×10^10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanisms of P-HC, P-CL, and P-LG were of reaction order f(α) = (1-α)^2, f(α) = 1-α and f(α) = (1-α)^n (n = 6-8), respectively. The mechanisms of P-HC and P-CL could be further reconstructed to n-th order Avrami-Erofeyev models f(α) = 0.62(1-α)[-ln(1-α)]^-0.61 (n = 0.62) and f(α) = 1.08(1-α)[-ln(1-α)]^0.074 (n = 1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.

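    The Asym2sig peak shape is the product of a rising and a falling logistic with independent widths; a Python/NumPy sketch of the function and of a three-peak model of the DTG curve (the parameter layout is an assumption, not the authors' code):

        import numpy as np

        def asym2sig(T, A, Tc, w1, w2, w3):
            # Product of a rising and a falling logistic with independent
            # widths w2 and w3; w1 controls the flat-top width around Tc.
            rise = 1.0 / (1.0 + np.exp(-(T - Tc + w1 / 2) / w2))
            fall = 1.0 / (1.0 + np.exp(-(T - Tc - w1 / 2) / w3))
            return A * rise * (1.0 - fall)

        def dtg_model(T, *p):
            # DTG curve as the sum of three pseudo-component peaks
            # (P-HC, P-CL, P-LG), five parameters each.
            return sum(asym2sig(T, *p[5 * i:5 * i + 5]) for i in range(3))

    The dtg_model function can then be passed to a fitter such as scipy.optimize.curve_fit with fifteen initial parameters, one set per pseudo-component.
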
  2. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period through updating of the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the filtered signal with the maximum kurtosis among all candidates within the assigned iteration count as the final choice. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or a choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its range of application. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.

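    The key change over MCKD, estimating the iterative period from the autocorrelation of the envelope signal, can be sketched in Python/SciPy as follows; the min_lag guard against the zero-lag peak is an assumption:

        import numpy as np
        from scipy.signal import hilbert

        def estimate_fault_period(x, fs, min_lag=10):
            # Envelope via the analytic signal, mean-removed so the
            # autocorrelation peaks reflect the impulse repetition rate.
            env = np.abs(hilbert(x))
            env = env - env.mean()
            ac = np.correlate(env, env, mode='full')[len(env) - 1:]
            # First dominant peak past zero lag gives the period estimate.
            lag = min_lag + np.argmax(ac[min_lag:])
            return lag / fs    # estimated fault period in seconds
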
  3. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  4. Statistically sound evaluation of trace element depth profiles by ion beam analysis

    International Nuclear Information System (INIS)

    Schmid, K.; Toussaint, U. von

    2012-01-01

    This paper presents the underlying physics and statistical models that are used in the newly developed program NRADC for fully automated deconvolution of trace-level impurity depth profiles from ion beam data. The program applies Bayesian statistics to find the most probable depth profile given ion beam data measured at different energies and angles for a single sample. Limiting the analysis to %-level amounts of material allows one to linearize the forward calculation of ion beam data, which greatly improves the computation speed. This makes it possible, for the first time, to apply the maximum-likelihood approach to both the fitting of the experimental data and the determination of confidence intervals of the depth profiles for real-world applications. The different steps of the automated deconvolution are exemplified by applying the program to artificial and real experimental data.

  5. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  6. Statistical Physics

    CERN Document Server

    Wannier, Gregory Hugh

    1966-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  7. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  8. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  9. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

  10. Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  11. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  12. WPRDC Statistics

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.

  13. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents a mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics that until now have remained unsolved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  14. Gonorrhea Statistics

    Science.gov (United States)


  15. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The aim of the study is to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics: reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved

  16. Vital statistics

    CERN Document Server

    MacKenzie, Dana

    2004-01-01

    The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue the expansion of knowledge about the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approach. (Edited abstract).

  17. Influence of the beam divergence on the quality neutron radiographic images improved by Richardson-Lucy deconvolution

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2010-01-01

    Full text: Images produced by radiation transmission, like many others, are affected by disturbances caused by random and systematic uncertainties. Those caused by noise or statistical dispersion can be diminished by a filtering procedure which eliminates the high frequencies associated with the noise, but unfortunately also those belonging to the signal itself. Systematic uncertainties could, in principle, be more effectively removed if one knew the spoiling convolution function causing the degradation of the image. This function depends upon the detector resolution and the non-punctual character of the source employed in the acquisition, which blur the image, making a single point appear as a spot with a vanishing edge. For an extended source exhibiting a reasonably parallel beam, the penumbra degrading the image is caused by the unavoidable beam divergence. In both cases, the essential information needed to improve the degraded image is the law of transformation of a single point into a blurred spot, known as the point spread function (PSF). Even for an isotropic system, where this function would have a symmetric bell-like shape, it is very difficult to obtain experimentally and to apply to the data processing. For this reason it is usually replaced by an approximate analytical function such as a Gaussian or Lorentzian. In this work, Richardson-Lucy deconvolution has been applied to ameliorate thermal neutron radiographic images acquired with imaging plates, using a Gaussian PSF as the deconvolver. Due to the divergence of the neutron beam, reaching 1 deg 16', the penumbra affecting the final image depends upon the gap between object and detector. Moreover, even if the object were placed in direct contact with the detector, the non-zero dimension of the object along the beam path would produce penumbrae of different magnitudes, i.e., the spatial resolution of the system would depend upon the object-detector arrangement. This means that the width of the PSF increases

  18. Statistical optics

    Science.gov (United States)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  19. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  20. Statistical mechanics

    CERN Document Server

    Jana, Madhusudan

    2015-01-01

    This self-sufficient text on statistical mechanics is written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is gradually and thoroughly developed. All three types of statistical distribution functions are derived separately, with their ranges of application and their limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena - thermal and electrical conductivity, the Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

  1. Statistical physics

    CERN Document Server

    Guénault, Tony

    2007-01-01

    In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...

  2. Statistical Physics

    CERN Document Server

    Mandl, Franz

    1988-01-01

    The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition E. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient

  3. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  4. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  5. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  6. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  7. Statistical Computing

    Indian Academy of Sciences (India)

    Elements of statistical computing are discussed in this series by Sudhakar Kunte, covering statistical inference and finite population sampling. This instalment describes two methods of drawing a random number between 0 and 1, motivated by the coin toss by which a cricket captain gets the option to decide whether to field first or bat first, a toss which may of course not be fair.

  8. Statistical thermodynamics

    CERN Document Server

    Schrödinger, Erwin

    1952-01-01

    Nobel Laureate's brilliant attempt to develop a simple, unified standard method of dealing with all cases of statistical thermodynamics - classical, quantum, Bose-Einstein, Fermi-Dirac, and more.The work also includes discussions of Nernst theorem, Planck's oscillator, fluctuations, the n-particle problem, problem of radiation, much more.

  9. A novel deconvolution method for modeling UDP-N-acetyl-D-glucosamine biosynthetic pathways based on 13C mass isotopologue profiles under non-steady-state conditions

    Directory of Open Access Journals (Sweden)

    Belshoff Alex C

    2011-05-01

    ...against experimental data. The reproducibility and robustness of the deconvolution were verified by replicate experiments, extensive statistical analyses, and cross-validation against NMR data. Conclusions: This computational approach revealed the relative fluxes through the different biosynthetic pathways of UDP-GlcNAc, which comprises simultaneous sequential and parallel reactions, providing new insight into the regulation of UDP-GlcNAc levels and O-linked protein glycosylation. This is the first such analysis of UDP-GlcNAc dynamics, and the approach is generally applicable to other complex metabolites comprising distinct metabolic subunits, where sufficient numbers of isotopologues can be unambiguously resolved and accurately measured.

  10. Transplantation of epiphytic bioaccumulators (Tillandsia capillaris) for high spatial resolution biomonitoring of trace elements and point sources deconvolution in a complex mining/smelting urban context

    Science.gov (United States)

    Goix, Sylvaine; Resongles, Eléonore; Point, David; Oliva, Priscia; Duprey, Jean Louis; de la Galvez, Erika; Ugarte, Lincy; Huayta, Carlos; Prunier, Jonathan; Zouiten, Cyril; Gardon, Jacques

    2013-12-01

    Monitoring atmospheric trace element (TE) levels and tracing their source origin is essential for exposure assessment and human health studies. Epiphytic Tillandsia capillaris plants were used as bioaccumulators of TE in a complex polymetallic mining/smelting urban context (Oruro, Bolivia). Specimens collected from a pristine reference site were transplanted at a high spatial resolution (~1 sample/km²) throughout the urban area. Twenty-seven elements were measured after a 4-month exposure, also providing new reference values for the reference material BCR482. A statistical power analysis of this biomonitoring mapping approach against classical aerosol surveys performed on the same site showed the better aptitude of T. capillaris to detect geographical trends and to deconvolute multiple contamination sources using geostatistical principal component analysis. Transplanted specimens in the vicinity of the mining and smelting areas were characterized by extreme TE accumulation (Sn > Ag > Sb > Pb > Cd > As > W > Cu > Zn). Three contamination sources were identified: mining (Ag, Pb, Sb), smelting (As, Sn) and road traffic (Zn) emissions, confirming the results of a previous aerosol survey.

  11. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

    In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal versus dose curve has been compared with the one measured in a ⁶⁰Co beam. Different experiments have been performed in order to observe the effects of thermal stress and radiation damage on the detector sensitivity. A LET dependence of the TL response has been observed. In order to get a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are extensively reported, and the possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  12. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

    The deconvolution effects of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems are investigated by using the finite element method. Two different earthquake input mechanisms are used to consider the deconvolution effects in the analyses: the standard rigid-base input and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example. As the near-fault ground motion, the 1999 Kocaeli earthquake record is selected. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  13. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiography using the deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, collected prospectively with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point preference scale. The significance of the differences in readers' preferences was tested with a Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean score, 3.6; range, 3.2-4.0). In conclusion, the visibility of chest anatomical structures rendered with the deconvolution algorithm of the PSF was judged superior to that of original chest radiography.

  14. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Kuo Men

    2017-12-01

    Background: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods: The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results: The proposed DDNN method outperformed VGG-16 in all the segmentations. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. Conclusion: DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy...

  15. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images.

    Science.gov (United States)

    Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong

    2017-01-01

    Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients were the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. The proposed DDNN method outperformed VGG-16 in all the segmentations. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for GTVnd, and 82.6% for CTV, whereas VGG-16 obtained 72.3, 33.7, and 73.7% for the DSC values, respectively. DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a...

  16. The deconvolution of Doppler-broadened positron annihilation measurements using fast Fourier transforms and power spectral analysis

    International Nuclear Information System (INIS)

    Schaffer, J.P.; Shaughnessy, E.J.; Jones, P.L.

    1984-01-01

    A deconvolution procedure which corrects Doppler-broadened positron annihilation spectra for instrument resolution is described. The method employs fast Fourier transforms, is model independent, and does not require iteration. The mathematical difficulties associated with the ill-posed Fredholm integral equation of the first kind are overcome by using power spectral analysis to select a limited number of low-frequency Fourier coefficients. The FFT/power spectrum method is then demonstrated for an irradiated high-purity single-crystal sapphire sample. (orig.)
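
    The core of the FFT/power-spectrum method, division of Fourier coefficients followed by retention of a limited number of low-frequency terms, can be sketched in a few lines of Python. The cutoff n_keep stands in for the coefficient count selected by power spectral analysis; all names are illustrative.

```python
import numpy as np

def fft_deconvolve(y, r, n_keep):
    """Deconvolve measured spectrum y by instrument response r (same grid),
    keeping only the n_keep lowest-frequency Fourier coefficients."""
    Y, R = np.fft.fft(y), np.fft.fft(r)
    X = Y / R                                  # naive deconvolution amplifies noise
    mask = np.zeros(X.size)
    mask[:n_keep] = mask[-n_keep:] = 1.0       # low-frequency passband (both ends)
    return np.fft.ifft(X * mask).real
```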

  17. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request
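
    A hedged sketch of the maximum entropy principle behind such unfolding follows. MAXED itself solves a dual formulation; this primal version, which maximizes the cross-entropy relative to a default spectrum while penalizing the chi-square misfit, only conveys the idea, and every name and parameter is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_unfold(counts, response, default, sigma, lam=1.0):
    """counts: sphere readings; response: (n_spheres x n_bins) response matrix;
    default: strictly positive prior spectrum; sigma: reading uncertainties."""
    def objective(logf):
        f = np.exp(logf)                       # log parameterization keeps f > 0
        entropy = np.sum(f - default - f * np.log(f / default))
        chi2 = np.sum(((response @ f - counts) / sigma) ** 2)
        return -entropy + lam * chi2           # maximize entropy, penalize misfit
    res = minimize(objective, np.log(default), method="L-BFGS-B")
    return np.exp(res.x)
```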

  18. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

    Interpretation of high resolution aeromagnetic data of Ilesha and its environs within the basement complex of the geological setting of Southwestern Nigeria was carried out in this study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. The investigation was carried out using Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth weighting analyses across the study area using an Euler deconvolution filter control file in the Oasis montaj software. Total magnetic intensity in the field ranged from -77.7 to 139.7 nT, revealing both high-amplitude and low-amplitude anomalies in the area under consideration. The study area is characterised by high intensities correlated with lithological variation in the basement, the sharp contrast being enhanced by the difference between the magnetic susceptibilities of the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with a depth ranging from -500 to 1000 m. The Euler deconvolution results show that the basement relief is generally gentle and flat, lying within the basement terrain.
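
    The depth estimation above rests on Euler's homogeneity equation, (x - x0) dT/dx + (y - y0) dT/dy + (z - z0) dT/dz = N(B - T), solved window by window over the grid. A single-window least-squares solve might look like this sketch, where the coordinate grids, field derivatives and structural index N are assumed to be given (e.g. exported from the processing software).

```python
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, N):
    """Solve Euler's homogeneity equation in one window by least squares.
    Returns the source position (x0, y0, z0) and the regional field B."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T       # rearranged homogeneity equation
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol                                 # [x0, y0, z0, B]
```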

  19. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Brett [Institute of Biomedical Studies, Baylor University, Waco, TX 76798 (United States); Neumann, Elizabeth K. [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States); Stow, Sarah M.; May, Jody C.; McLean, John A. [Department of Chemistry, Vanderbilt University, Nashville, TN 37235 (United States); Vanderbilt Institute of Chemical Biology, Nashville, TN 37235 (United States); Vanderbilt Institute for Integrative Biosystems Research and Education, Nashville, TN 37235 (United States); Center for Innovative Technology, Nashville, TN 37235 (United States); Solouki, Touradj, E-mail: Touradj_Solouki@baylor.edu [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States)

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  20. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    International Nuclear Information System (INIS)

    Harper, Brett; Neumann, Elizabeth K.; Stow, Sarah M.; May, Jody C.; McLean, John A.; Solouki, Touradj

    2016-01-01

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match with CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  1. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to 'cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
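
    A minimal sketch of the wavelet-denoising step discussed above, using PyWavelets with a soft universal threshold; the wavelet family, decomposition level and threshold rule are our assumptions, not necessarily the authors' choices.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    """Soft-threshold detail coefficients; noise scale from the finest diagonal band."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
    thresh = sigma * np.sqrt(2 * np.log(img.size))         # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in lvl)
        for lvl in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```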

  2. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.
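
    The normalization described above can be sketched as a per-trace frequency-domain deconvolution filter; the stabilization constant and the band-selection mechanism below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def deconvolution_filter(modeled, observed, band, eps=1e-3):
    """Per-trace filter F = M * conj(D) / (|D|^2 + eps), restricted to `band`
    (an index array or slice over the low-frequency bins)."""
    M = np.fft.rfft(modeled, axis=-1)
    D = np.fft.rfft(observed, axis=-1)
    F = M * np.conj(D) / (np.abs(D) ** 2 + eps)
    mask = np.zeros(F.shape[-1])
    mask[band] = 1.0
    return np.fft.irfft(F * mask, n=modeled.shape[-1], axis=-1)
```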

  3. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
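
    A toy comparison of the two approaches on synthetic chirp data is sketched below. The active-set step is realized with SciPy's NNLS solver as a stand-in for the authors' algorithm, and the delays and amplitudes are invented for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls
from scipy.signal import chirp, correlate

fs = 5e5
t = np.arange(500) / fs
probe = chirp(t, f0=5e4, f1=1.5e5, t1=t[-1])   # coded excitation
received = np.zeros(700)
received[120:620] += 0.7 * probe               # direct path (delay 120 samples)
received[150:650] += 0.4 * probe               # reflected path (delay 150 samples)

# Matched filtering: correlation peaks mark the two transit times.
matched = correlate(received, probe, mode="valid")

# Deconvolution: non-negative least squares against the convolution matrix
# yields a sparse transit time spectrum with better side-lobe suppression.
A = toeplitz(np.r_[probe, np.zeros(200)], np.zeros(201))
tts, _ = nnls(A, received)
```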

  4. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, ability to achieve adequate regularization to reproduce less noisy solutions, and that it does not require prior knowledge of the noise condition. The proposed method is applied on actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
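
    The truncated-SVD variant of such a deconvolution can be sketched compactly: build a convolution matrix from the sampled arterial input function, then discard small singular values. The relative threshold here is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import toeplitz

def tsvd_residue(ct, aif, dt, rel_thresh=0.2):
    """Estimate the tissue residue function from a tissue curve ct and
    an arterial input function aif, both sampled at interval dt."""
    A = dt * toeplitz(aif, np.zeros_like(aif))   # lower-triangular convolution
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)  # truncate small values
    return Vt.T @ (s_inv * (U.T @ ct))
```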

  5. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
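
    The Hodrick-Prescott episode-identification step might be prototyped as below with the hpfilter routine from statsmodels; the smoothing parameter, threshold and edge handling are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def find_episodes(tac, lamb=1600.0, thresh=0.005):
    """Flag contiguous runs where the HP trend of the TAC exceeds a threshold.
    Assumes the record starts and ends below the threshold."""
    _, trend = hpfilter(np.asarray(tac, dtype=float), lamb=lamb)
    above = trend > thresh
    edges = np.flatnonzero(np.diff(above.astype(int))) + 1
    return list(zip(edges[::2], edges[1::2]))   # (start, end) sample indices
```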

  6. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-square misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter includes inherently a normalization between the modeled and observed data, thus it can address the unbalanced amplitude of a damped wavefield. We, specifically, normalize the modeled data with the observed data in the frequency-domain to estimate the deconvolution filter and selectively choose a frequency-band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long wavelength structure without low frequency information in the recorded data.

  7. Energy Statistics

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the annually issued publication Energiatilastot - Energy Statistics, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; and Excise taxes and turnover taxes included in consumer prices of some energy sources.

  8. Statistical Optics

    Science.gov (United States)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  9. Statistical utilitarianism

    OpenAIRE

    Pivato, Marcus

    2013-01-01

    We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...

  10. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  11. Deconvolution of Voltage Sensor Time Series and Electro-diffusion Modeling Reveal the Role of Spine Geometry in Controlling Synaptic Strength.

    Science.gov (United States)

    Cartailler, Jerome; Kwon, Taekyung; Yuste, Rafael; Holcman, David

    2018-03-07

    Most synaptic excitatory connections are made on dendritic spines. But how the voltage in spines is modulated by their geometry remains unclear. To investigate the electrical properties of spines, we combine voltage imaging data with electro-diffusion modeling. We first present a temporal deconvolution procedure for the genetically encoded voltage sensor expressed in hippocampal cultured neurons and then use electro-diffusion theory to compute the electric field and the current-voltage conversion. We extract a range for the neck resistances of ⟨R⟩ = 100 ± 35 MΩ. When a significant current is injected in a spine, the neck resistance can be inversely proportional to its radius, but not to the radius squared, as predicted by Ohm's law. We conclude that the postsynaptic voltage can be modulated not only by changing the number of receptors, but also by the spine geometry. Thus, spine morphology could be a key component in determining synaptic transduction and plasticity. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Noise Attenuation Estimation for Maximum Length Sequences in Deconvolution Process of Auditory Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Xian Peng

    2017-01-01

    The use of maximum length sequences (m-sequences) has been found beneficial for recovering both linear and nonlinear components at rapid stimulation. Since an m-sequence is fully characterized by a primitive polynomial of different orders, the selection of the polynomial order can be problematic in practice. Usually, the m-sequence is delivered repetitively in a looped fashion. Ensemble averaging is carried out as the first step, followed by cross-correlation analysis to deconvolve linear/nonlinear responses. According to the classical noise reduction property based on the additive noise model, theoretical equations have been derived in the present study for measuring noise attenuation ratios (NARs) after the averaging and correlation processes. A computer simulation experiment was conducted to test the derived equations, and a nonlinear deconvolution experiment was also conducted using order 7 and 9 m-sequences to address this issue with real data. Both theoretical and experimental results show that the NAR is essentially independent of the m-sequence order and is decided by the total length of valid data, as well as the stimulation rate. The present study offers a guideline for m-sequence selection, which can be used to estimate the required recording time and signal-to-noise ratio when designing m-sequence experiments.
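
    The looped m-sequence deconvolution can be demonstrated in a few lines: an m-sequence has a nearly ideal circular autocorrelation, so circular cross-correlation with the stimulus recovers the impulse response. The toy response and noise level below are invented for illustration.

```python
import numpy as np
from scipy.signal import max_len_seq

order = 9
seq = max_len_seq(order)[0] * 2.0 - 1.0        # map {0, 1} -> {-1, +1}
L = seq.size                                   # 2**order - 1 = 511

h = np.zeros(L)
h[3], h[10], h[25] = 1.0, 0.6, -0.3            # toy impulse response
response = np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(h)))  # circular conv
response += 0.05 * np.random.randn(L)          # additive noise

# Circular cross-correlation with the stimulus deconvolves the response;
# the m-sequence autocorrelation makes this recover h up to a scale ~ L.
recovered = np.real(np.fft.ifft(np.fft.fft(response) * np.conj(np.fft.fft(seq)))) / L
```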

  13. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

    In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) to the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded agreeing results on three sets of X-band spectra and the following components were identified: an orthorhombic component attributed to CO₂⁻, an axial component CO₃³⁻, as well as four isotropic components, three of which could be attributed to SO₂⁻, a tumbling CO₂⁻ and a central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra, where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods.

  14. Digital high-pass filter deconvolution by means of an infinite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Födisch, P., E-mail: p.foedisch@hzdr.de [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Wohsmann, J. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Lange, B. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Schönherr, J. [Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany); Enghardt, W. [OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstr. 74, PF 41, 01307 Dresden (Germany); Helmholtz-Zentrum Dresden - Rossendorf, Institute of Radiooncology, Bautzner Landstr. 400, 01328 Dresden (Germany); German Cancer Consortium (DKTK) and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Kaever, P. [Helmholtz-Zentrum Dresden - Rossendorf, Department of Research Technology, Bautzner Landstr. 400, 01328 Dresden (Germany); Dresden University of Applied Sciences, Faculty of Electrical Engineering, Friedrich-List-Platz 1, 01069 Dresden (Germany)

    2016-09-11

    In the application of semiconductor detectors, the charge-sensitive amplifier is widely used in front-end electronics. Its output signal is shaped by a typical exponential decay. Depending on the feedback network, this type of front-end electronics suffers from the ballistic deficit problem or an increased rate of pulse pile-ups. Moreover, spectroscopy applications require a correction of the pulse height, while a shortened pulse width is desirable for high-throughput applications. For both objectives, digital deconvolution of the exponential decay is convenient. With a general method and the signals of our custom charge-sensitive amplifier for cadmium zinc telluride detectors, we show how the transfer function of an amplifier is adapted to an infinite impulse response (IIR) filter. This paper investigates different design methods for an IIR filter in the discrete-time domain and verifies the obtained filter coefficients with respect to the equivalent continuous-time frequency response. Finally, the exponential decay is shaped to a step-like output signal that can be exploited by forward-looking pulse processing.
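
    A first-order sketch of the deconvolution: a zero placed at the per-sample decay factor cancels the exponential's pole, and an integrator restores a step-like output. The sampling rate and decay constant are assumed values, not those of the authors' amplifier.

```python
import numpy as np
from scipy.signal import lfilter

fs, tau = 100e6, 50e-6            # assumed sampling rate and preamp decay constant
d = np.exp(-1.0 / (fs * tau))     # per-sample decay factor of the exponential

n = np.arange(2000)
pulse = np.exp(-n / (fs * tau))   # ideal exponential pulse from the CSA

# H(z) = (1 - d z^-1) / (1 - z^-1): pole-zero cancellation plus integration.
step = lfilter([1.0, -d], [1.0, -1.0], pulse)  # near-ideal unit step
```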

  15. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    Science.gov (United States)

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.

  16. The use of deconvolution techniques to identify the fundamental mixing characteristics of urban drainage structures.

    Science.gov (United States)

    Stovin, V R; Guymer, I; Chappell, M J; Hattersley, J G

    2010-01-01

    Mixing and dispersion processes affect the timing and concentration of contaminants transported within urban drainage systems. Hence, methods of characterising the mixing effects of specific hydraulic structures are of interest to drainage network modellers. Previous research, focusing on surcharged manholes, utilised the first-order Advection-Dispersion Equation (ADE) and Aggregated Dead Zone (ADZ) models to characterise dispersion. However, although systematic variations in travel time as a function of discharge and surcharge depth have been identified, the first order ADE and ADZ models do not provide particularly good fits to observed manhole data, which means that the derived parameter values are not independent of the upstream temporal concentration profile. An alternative, more robust, approach utilises the system's Cumulative Residence Time Distribution (CRTD), and the solute transport characteristics of a surcharged manhole have been shown to be characterised by just two dimensionless CRTDs, one for pre- and the other for post-threshold surcharge depths. Although CRTDs corresponding to instantaneous upstream injections can easily be generated using Computational Fluid Dynamics (CFD) models, the identification of CRTD characteristics from non-instantaneous and noisy laboratory data sets has been hampered by practical difficulties. This paper shows how a deconvolution approach derived from systems theory may be applied to identify the CRTDs associated with urban drainage structures.
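
    The deconvolution of a residence time distribution from noisy laboratory traces might be sketched as below; non-negativity is imposed with NNLS as a simple surrogate for the systems-theory approach the paper derives, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

def estimate_crtd(upstream, downstream, dt):
    """Estimate the residence time distribution linking two solute traces,
    then integrate it into a normalized CRTD."""
    A = dt * toeplitz(upstream, np.zeros_like(upstream))  # convolution matrix
    rtd, _ = nnls(A, downstream)             # non-negative impulse response
    crtd = np.cumsum(rtd) * dt
    return crtd / crtd[-1]                   # normalize so the CRTD ends at 1
```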

  17. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient (impulsive and non-impulsive) surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the sources' emission by their complex impedance, a function of the near-field vibrations and soil stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences among the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-to-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and demonstrate that the results obtained by different sources present low values in their repeatability norm. The comparison highlights the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data, and to increase the performance of permanent acquisition installations for time-lapse applications.

  18. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    Science.gov (United States)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Jürg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2011-06-01

    Seismic interferometry, also known as Green's function retrieval by crosscorrelation, has a wide range of applications, ranging from surface-wave tomography using ambient noise, to creating virtual sources for improved reflection seismology. Despite its successful applications, the crosscorrelation approach also has its limitations. The main underlying assumptions are that the medium is lossless and that the wavefield is equipartitioned. These assumptions are in practice often violated: the medium of interest is often illuminated from one side only, the sources may be irregularly distributed, and losses may be significant. These limitations may partly be overcome by reformulating seismic interferometry as a multidimensional deconvolution (MDD) process. We present a systematic analysis of seismic interferometry by crosscorrelation and by MDD. We show that for the non-ideal situations mentioned above, the correlation function is proportional to a Green's function with a blurred source. The source blurring is quantified by a so-called interferometric point-spread function which, like the correlation function, can be derived from the observed data (i.e. without the need to know the sources and the medium). The source of the Green's function obtained by the correlation method can be deblurred by deconvolving the correlation function for the point-spread function. This is the essence of seismic interferometry by MDD. We illustrate the crosscorrelation and MDD methods for controlled-source and passive-data applications with numerical examples and discuss the advantages and limitations of both methods.
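
    The deblurring step of MDD, deconvolving the correlation function by the interferometric point-spread function, reduces per frequency to a damped least-squares matrix inversion, as in this sketch. The array layout (one square point-spread matrix per frequency, with virtual sources on the receiver grid) and the damping rule are our assumptions.

```python
import numpy as np

def mdd(C, P, eps=1e-3):
    """Solve C = G P frequency by frequency: G = C P^H (P P^H + eps*I)^(-1).
    C: correlation functions, shape (n_freq, n_rec, n_vs); P: point-spread
    functions, shape (n_freq, n_vs, n_vs)."""
    G = np.empty_like(C)
    for k in range(C.shape[0]):                      # loop over frequencies
        Pk = P[k]
        PPh = Pk @ Pk.conj().T
        damp = eps * np.trace(PPh).real / Pk.shape[0]
        G[k] = C[k] @ Pk.conj().T @ np.linalg.inv(PPh + damp * np.eye(Pk.shape[0]))
    return G
```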

  19. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable the creation of a ghost-free image of the medium with either cross-correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  20. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    International Nuclear Information System (INIS)

    Jimenez-Ruiz, A.; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R.

    2017-01-01

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs), causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The evolution of the CIELab chromaticity parameters has also been explored as a means to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying the color changes of 17 nm citrate-capped AuNPs, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained by both means are in good agreement with each other.

  1. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    Science.gov (United States)

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by a proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed both on simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as hypo-/hyperglycemic alert generators or the artificial pancreas.
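
    A minimal sketch of this kind of regularized deconvolution, assuming a first-order blood-to-interstitium kinetic kernel with time constant tau and a second-difference roughness penalty weighted by gamma (both values illustrative, not those identified in the study):

    ```python
    import numpy as np

    def deconvolve_cgm(y, dt=1.0, tau=10.0, gamma=50.0):
        """Estimate blood glucose from an interstitial CGM trace y by
        Tikhonov-regularized deconvolution of a first-order kinetic kernel."""
        n = len(y)
        g = (dt / tau) * np.exp(-np.arange(n) * dt / tau)    # impulse response
        G = np.array([[g[i - j] if i >= j else 0.0 for j in range(n)]
                      for i in range(n)])                    # convolution matrix
        D = np.diff(np.eye(n), n=2, axis=0)                  # roughness penalty
        return np.linalg.solve(G.T @ G + gamma * D.T @ D, G.T @ y)
    ```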

  2. INTRAVAL project phase 2. Analysis of STRIPA 3D data by a deconvolution technique

    International Nuclear Information System (INIS)

    Ilvonen, M.; Hautojaervi, A.; Paatero, P.

    1994-09-01

    The data analysed in this report were obtained in tracer experiments performed from a specially excavated drift in good granite rock at the level of 360 m below the ground in the Stripa mine. Tracer transport paths from the injection points to the collecting sheets at the tunnel walls were tens of meters long. Data for six tracers that arrived in measurable concentrations were processed by several methods of data analysis to reveal the transport behaviour of solutes in the rock fractures. Techniques like direct inversion of the data, Fourier analysis, Singular Value Decomposition (SVD) and non-negative least squares fitting (NNLS) were employed. A newly developed code based on a general-purpose approach for solving deconvolution-type or integral equation problems, Extreme Value Estimation (EVE), proved to be a very helpful tool in deconvolving impulse responses from the injection flow rates and break-through curves of tracers and in assessing the physical confidence of the results. (23 refs., 33 figs.)

  3. Understanding AuNP interaction with low-generation PAMAM dendrimers: a CIELab and deconvolution study

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez-Ruiz, A., E-mail: ailjimrui@alum.us.es; Carnerero, J. M.; Castillo, P. M.; Prado-Gotor, R., E-mail: pradogotor@us.es [University of Seville, The Department of Physical Chemistry (Spain)

    2017-01-15

    Low-generation polyamidoamine (PAMAM) dendrimers are known to adsorb on the surface of gold nanoparticles (AuNPs), causing aggregation and color changes. In this paper, a thorough study of this affinity using absorption spectroscopy, colorimetric, and emission methods has been carried out. Results show that, for citrate-capped gold nanoparticles, interaction with the dendrimer is not only of an electrostatic character but instead occurs, at least in part, through the dendrimer’s uncharged internal amino groups. The evolution of the CIELab chromaticity parameters has also been explored as a means to quantify dendrimer interaction with the red-colored nanoparticles. By measuring and quantifying the color changes of 17 nm citrate-capped AuNPs, which are strongly dependent on their aggregation state, binding free energies are obtained for the first time for these systems. Results are confirmed via an alternate fitting method which makes use of deconvolution parameters from absorbance spectra. Binding free energies obtained by both means are in good agreement with each other.

  4. Analysis of the deconvolution of the thermoluminescent curve of the zirconium oxide doped with graphite

    International Nuclear Information System (INIS)

    Salas C, P.; Estrada G, R.; Gonzalez M, P.R.; Mendoza A, D.

    2003-01-01

    In this work, we present a mathematical analysis of the behaviour of the thermoluminescent (TL) glow curve induced by gamma radiation in samples of zirconium oxide doped with different amounts of graphite. According to the results, gamma radiation induces a TL curve with two emission maxima, located at 139 and 250 °C; the area under the curve increases as a function of the exposure time. Deconvolution of the curve, following the theory that this behaviour must obey a Boltzmann distribution, shows that each maximum grows at a different rate as the exposure time increases. Likewise, we observed that after irradiation was stopped, each maximum decayed at a different rate. The behaviour observed in the samples is of particular interest because zirconium oxide has attracted the attention of many research groups: the material has demonstrated many applications in thermoluminescent dosimetry and can be used for the quantification of radiation. (Author)

  5. Deconvolution analysis of ⁹⁹ᵐTc-methylene diphosphonate kinetics in metabolic bone disease

    International Nuclear Information System (INIS)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.; Hamburg Univ.

    1981-01-01

    The kinetics of ⁹⁹ᵐTc-methylene diphosphonate (MDP) and ⁴⁷Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of ⁹⁹ᵐTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. ⁴⁷Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The ⁹⁹ᵐTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between ⁹⁹ᵐTc-MDP bone accumulation rates and the results of ⁴⁷Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71). The lack of correlation with ⁴⁷Ca kinetics might suggest a preferential binding of ⁹⁹ᵐTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations. (orig.)

  6. Exclusion statistics and integrable models

    International Nuclear Information System (INIS)

    Mashkevich, S.

    1998-01-01

    The definition of exclusion statistics that was given by Haldane admits a 'statistical interaction' between distinguishable particles (multispecies statistics). For such statistics, thermodynamic quantities can be evaluated exactly; explicit expressions are presented here for cluster coefficients. Furthermore, single-species exclusion statistics is realized in one-dimensional integrable models of the Calogero-Sutherland type. The interesting questions of generalizing this correspondence to the higher-dimensional and multispecies cases remain essentially open; however, our results provide some hints for the search for the models in question.

  7. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches including incorporation of the system response function in the reconstruction have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for

  8. Energy statistics

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    World data from the United Nations' latest Energy Statistics Yearbook, first published in our last issue, are completed here. The 1984-86 data were revised and 1987 data added for world commercial energy production and consumption, world natural gas plant liquids production, world LP-gas production, imports, exports, and consumption, world residual fuel oil production, imports, exports, and consumption, world lignite production, imports, exports, and consumption, world peat production and consumption, world electricity production, imports, exports, and consumption (Table 80), and world nuclear electric power production

  9. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    Directory of Open Access Journals (Sweden)

    Roerdink Jos BTM

    2008-04-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRFs) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as defining region-specific HRFs, efficiently representing a general HRF, or comparing subject-specific HRFs. Results: ForWaRD is applied to fMRI time signals, after removing low-frequency trends by a wavelet-based method, and the output of ForWaRD is a time series of volumes, containing the HRF in each voxel. Compared to more complex methods, this extraction algorithm requires few assumptions (separability of signal and noise in the frequency and wavelet domains, and the general linear model) and it is fast (HRF extraction from a single fMRI data set takes about the same time as spatial resampling). The extraction method is tested on simulated event-related activation signals, contaminated with noise from a time series of real MRI images. An application for HRF data is demonstrated in a simple event-related experiment: data are extracted from a region with significant effects of interest in a first time series. A continuous-time HRF is obtained by fitting a nonlinear function to the discrete HRF coefficients, and is then used to analyse a later time series. Conclusion: With the parameters used in this paper, the extraction method presented here is very robust to changes in signal properties. Comparison of analyses with fitted HRFs and with a canonical HRF shows that a subject-specific, regional HRF significantly improves detection power. Sensitivity and specificity increase not only in the region from which the HRFs are extracted, but also in other regions of interest.
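
    As a rough illustration of the two-domain idea (regularized inversion in the Fourier domain, shrinkage in the wavelet domain), the sketch below deconvolves a BOLD trace by a known stimulus train and then denoises the result; the regularization weight and threshold rule are placeholders, and this is a simplified stand-in for the actual ForWaRD estimator rather than the authors' implementation.

    ```python
    import numpy as np
    import pywt

    def hrf_forward_like(y, s, lam=0.1, wavelet="db4"):
        """Estimate an HRF by regularized Fourier deconvolution of the BOLD
        trace y by the stimulus train s, then denoise in the wavelet domain."""
        S = np.fft.rfft(s, len(y))
        H = np.fft.rfft(y) * np.conj(S) / (np.abs(S) ** 2 + lam)
        h = np.fft.irfft(H, len(y))                      # raw deconvolved HRF
        coeffs = pywt.wavedec(h, wavelet)
        coeffs = [coeffs[0]] + [pywt.threshold(c, 0.05 * np.abs(c).max(), "soft")
                                for c in coeffs[1:]]     # shrink detail coefficients
        return pywt.waverec(coeffs, wavelet)[:len(y)]
    ```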

  10. Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution.

    Directory of Open Access Journals (Sweden)

    J M Portegies

    We propose two strategies to improve the quality of tractography results computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations, associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD). Thereby we improve the FOD as input for subsequent tractography. Secondly, we introduce the fiber to bundle coherence (FBC), a measure for quantification of fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing the spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data, acquired with different scanning protocols. On the phantom data we find that PDE enhancements improve both local metrics and global metrics of tractography results, compared to CSD without enhancements. On the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and they reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.

  11. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging, as an accurate and fast acute ischemic stroke examination, has been widely used in the clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  12. Improved Peak Detection and Deconvolution of Native Electrospray Mass Spectra from Large Protein Complexes.

    Science.gov (United States)

    Lu, Jonathan; Trnka, Michael J; Roh, Soung-Hun; Robinson, Philip J J; Shiau, Carrie; Fujimori, Danica Galonic; Chiu, Wah; Burlingame, Alma L; Guan, Shenheng

    2015-12-01

    Native electrospray-ionization mass spectrometry (native MS) measures biomolecules under conditions that preserve most aspects of protein tertiary and quaternary structure, enabling direct characterization of large intact protein assemblies. However, native spectra derived from these assemblies are often partially obscured by low signal-to-noise as well as broad peak shapes because of residual solvation and adduction after the electrospray process. The wide peak widths together with the fact that sequential charge state series from highly charged ions are closely spaced means that native spectra containing multiple species often suffer from high degrees of peak overlap or else contain highly interleaved charge envelopes. This situation presents a challenge for peak detection, correct charge state and charge envelope assignment, and ultimately extraction of the relevant underlying mass values of the noncovalent assemblages being investigated. In this report, we describe a comprehensive algorithm developed for addressing peak detection, peak overlap, and charge state assignment in native mass spectra, called PeakSeeker. Overlapped peaks are detected by examination of the second derivative of the raw mass spectrum. Charge state distributions of the molecular species are determined by fitting linear combinations of charge envelopes to the overall experimental mass spectrum. This software is capable of deconvoluting heterogeneous, complex, and noisy native mass spectra of large protein assemblies as demonstrated by analysis of (1) synthetic mononucleosomes containing severely overlapping peaks, (2) an RNA polymerase II/α-amanitin complex with many closely interleaved ion signals, and (3) human TriC complex containing high levels of background noise.
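
    The detection idea, overlapped components revealing themselves as distinct dips in the second derivative of the raw spectrum, can be sketched as follows; the smoothing width is illustrative, and PeakSeeker itself adds charge-state assignment and envelope fitting on top of this step.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import argrelmin

    def overlapped_peak_positions(mz, intensity, smooth=2.0):
        """Flag candidate (possibly overlapped) peak apices as negative local
        minima of the smoothed second derivative of the spectrum."""
        d2 = gaussian_filter1d(intensity, smooth, order=2)  # smoothed 2nd derivative
        idx = argrelmin(d2)[0]                              # dips mark peak apices
        return mz[idx[d2[idx] < 0]]
    ```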

  13. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin


  14. Stupid statistics!

    Science.gov (United States)

    Tellinghuisen, Joel

    2008-01-01

    The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions (Gaussian, chi-square, and t) is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
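
    As a small worked example in the spirit of the review, the following nonlinear calibration fit returns the variance-covariance matrix, from which parameter uncertainties and their correlation are read off (the model and all numbers are synthetic):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def response(x, a, k):
        """Saturating calibration curve (synthetic example model)."""
        return a * (1.0 - np.exp(-k * x))

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 25)
    y = response(x, 2.0, 0.7) + rng.normal(0.0, 0.05, x.size)

    popt, pcov = curve_fit(response, x, y, p0=[1.0, 1.0])
    perr = np.sqrt(np.diag(pcov))              # 1-sigma parameter uncertainties
    corr = pcov[0, 1] / (perr[0] * perr[1])    # parameter correlation coefficient
    ```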

  15. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 and based on the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were the centroid distance/FWHM ratio, the Signal/Continuum ratio, and the counting rate. Tests used synthetic spectra built with 3σ uncertainty and gave suitable results for a centroid distance/FWHM ratio ≥ 2, a Signal/Continuum ratio ≥ 2, and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq·kg⁻¹ to (190±17) Bq·kg⁻¹ for potassium-40, from (343±7) Bq·kg⁻¹ to (881±6) Bq·kg⁻¹ for thorium-232, and from (100±3) Bq·kg⁻¹ to (164±4) Bq·kg⁻¹ for uranium-238. The soil activity varies from (148±1) Bq·kg⁻¹ to (652±31) Bq·kg⁻¹ for potassium-40, from (1100±11) Bq·kg⁻¹ to (5700±40) Bq·kg⁻¹ for thorium-232, and from (190±2) Bq·kg⁻¹ to (779±15) Bq·kg⁻¹ for uranium-238. Among 11 samples, the activity value discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for such an activity range and that convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within acceptable limits, if identification of each nuclide in the radioactive series is not required.
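
    A minimal sketch of the underlying idea, fitting two overlapping Gaussian photopeaks by direct Nelder-Mead simplex minimization of the squared residuals (the initial guess and the omission of a continuum term are simplifications):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def doublet(theta, x):
        """Two overlapping Gaussian photopeaks (no continuum term)."""
        a1, c1, s1, a2, c2, s2 = theta
        return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
                + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2))

    def fit_doublet(x, counts, theta0):
        """Resolve the multiplet with the derivative-free Nelder-Mead simplex."""
        loss = lambda th: np.sum((counts - doublet(th, x)) ** 2)
        return minimize(loss, theta0, method="Nelder-Mead").x
    ```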

  16. Thermoluminescence of nanocrystalline CaSO₄:Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    Energy Technology Data Exchange (ETDEWEB)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com [Department of Physics, University of Pune, Ganeshkhind, Pune -411007, India and Department of Physics, Fergusson College, Pune- 411004 (India); Patil, B. J.; Bhoraskar, V. N.; Dhole, S. D. [Department of Physics, University of Pune, Ganeshkhind, Pune -411007 (India); Sahare, P. D. [Department of Physics and Astrophysics, University of Delhi, Delhi- 110007 (India)

    2014-04-24

    Nanorods of CaSO₄:Dy, 20 nm in diameter and 200 nm in length, have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation at doses varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. The TL dose response shows a linear behavior up to 5 kGy and saturates with further increase in the dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of the TL glow curves, and the trapping parameters for the various peaks have been calculated with it.

  17. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    International Nuclear Information System (INIS)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Gibb, Andrew G.; Halpern, Mark; Marsden, Gaelen; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; France, Kevin; Gundersen, Joshua O.; Hughes, David H.; Martin, Peter G.; Olmi, Luca

    2011-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and ¹²CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. We report physical properties of ten compact sources, including six associated protostars, by fitting
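
    The core L-R update is compact enough to sketch; the version below is a generic, flux-conserving implementation for a 2-D map and does not reproduce the BLAST pipeline's convergence diagnostics:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30):
        """Iterative, flux-conserving Lucy-Richardson deconvolution of a 2-D map."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full(image.shape, image.mean())
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)   # data / model
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```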

  18. Digital sorting of complex tissues for cell type-specific gene expression profiles.

    Science.gov (United States)

    Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong

    2013-03-07

    Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimations. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell-type specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
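
    One building block of such deconvolution can be sketched as a non-negative linear unmixing step: given mixture profiles and cell-type fractions, recover the cell-type-specific profiles gene by gene. This is a simplified baseline, not DSA itself, which additionally estimates the fractions from marker genes.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def extract_profiles(M, F):
        """Recover cell-type-specific profiles S (genes x cell_types) from
        mixture expression M (genes x samples) and cell-type fractions
        F (cell_types x samples), solving M ~ S F with non-negativity,
        one gene at a time."""
        return np.array([nnls(F.T, m)[0] for m in M])
    ```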

  19. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not rely on synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the noise amplified during Fourier regularization.
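
    For reference, the simplest of the three compared approaches, Wiener filtering, reduces to one regularized division in the frequency domain; the noise-to-signal power ratio below is a tunable assumption, and the PSF is taken to be centered and the same shape as the image.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Frequency-domain Wiener deconvolution of a 2-D image;
        nsr is the assumed noise-to-signal power ratio."""
        H = np.fft.fft2(np.fft.ifftshift(psf))        # PSF transfer function
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener inverse filter
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
    ```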

  20. Isotropic non-white matter partial volume effects in constrained spherical deconvolution

    Directory of Open Access Journals (Sweden)

    Timo eRoine

    2014-03-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a noninvasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple nonparallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion weighting (2500-3000 s/mm²), reasonable SNR (~30), and using lower SH orders in GM-contaminated regions to minimize the non-WM PVEs.

  1. Isotropic non-white matter partial volume effects in constrained spherical deconvolution.

    Science.gov (United States)

    Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan

    2014-01-01

    Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion-weighting 2500-3000 s/mm², reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs in CSD.

  2. Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique

    Science.gov (United States)

    Järvinen, S. P.; Berdyugina, S. V.

    2010-10-01

    Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), when low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which

  3. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
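
    The alternating structure can be sketched with plain Richardson-Lucy inner updates; note that the paper's method replaces these with scaled gradient projection iterations and enforces the additional Strehl-ratio constraint on the PSF, which this sketch omits (psf0 is an image-sized initial guess):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def blind_rl(image, psf0, outer=10, inner=5):
        """Alternating blind deconvolution: RL updates of object and PSF
        in turn (2-D arrays; psf0 must have the same shape as image)."""
        obj = np.full(image.shape, image.mean())
        psf = psf0 / psf0.sum()
        for _ in range(outer):
            for _ in range(inner):                  # object update, PSF fixed
                ratio = image / np.maximum(fftconvolve(obj, psf, mode="same"), 1e-12)
                obj *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
            for _ in range(inner):                  # PSF update, object fixed
                ratio = image / np.maximum(fftconvolve(obj, psf, mode="same"), 1e-12)
                psf *= fftconvolve(ratio, obj[::-1, ::-1], mode="same")
                psf /= psf.sum()                    # keep PSF normalized
        return obj, psf
    ```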

  4. Hydrophobicity of silica thin films: The deconvolution and interpretation by Fourier-transform infrared spectroscopy

    Science.gov (United States)

    Saputra, Riza Eka; Astuti, Yayuk; Darmawan, Adi

    2018-06-01

    This work investigated the synthesis of dimethoxydimethylsilane:tetraethoxysilane (DMDMS:TEOS) silica thin films as well as the effect of DMDMS:TEOS molar ratios and calcination temperature on the hydrophobic properties of the films and their correlation with the behaviour of the FTIR spectra. The silica thin films were synthesized by the sol-gel method using a combination of DMDMS and TEOS as silica precursors, ethanol as solvent and ammonia as catalyst, with DMDMS:TEOS molar ratios of 10:90, 25:75, 50:50, 75:25 and 90:10. The results showed that the DMDMS:TEOS molar ratio had a significant impact on the hydrophobic properties of silica thin films coated on a glass surface. Furthermore, the correlation between water contact angle (WCA) and DMDMS:TEOS molar ratio was found to be parabolic, with the apex of the parabola at a DMDMS:TEOS molar ratio of 50:50 for all calcination temperatures. The silica xerogel exhibited notable changes in relative peak intensities, showing FTIR peak splitting of the asymmetric Si-O-Si stretching band (νas). To uncover what happens at this FTIR peak, deconvolution was conducted with a Gaussian approach. It was established that the changes in the Gaussian peak components were related to the DMDMS:TEOS molar ratios and the calcination temperature, which allowed us to tailor a DMDMS:TEOS silica polymer structure model based on the peak intensity ratios. With increasing DMDMS:TEOS molar ratio, the (cyclic Si-O-Si)/(linear Si-O-Si) ratio decreased, whilst the (C-H)/(linear Si-O-Si) ratio increased. Both ratios intersected at a DMDMS:TEOS molar ratio of 50:50, with contribution factor ratios of 1:16 and 1:50 for silica xerogel calcined at 300 °C and 500 °C, respectively. The importance of this research is that the DMDMS:TEOS molar ratio plays a decisive role in determining the hydrophobic properties of the thin films.

  5. Hydrophobicity of silica thin films: The deconvolution and interpretation by Fourier-transform infrared spectroscopy.

    Science.gov (United States)

    Saputra, Riza Eka; Astuti, Yayuk; Darmawan, Adi

    2018-03-14

    This work investigated the synthesis of dimethoxydimethylsilane:tetraethoxysilane (DMDMS:TEOS) silica thin films as well as the effect of DMDMS:TEOS molar ratios and calcination temperature on the hydrophobic properties of the films and their correlation with the behaviour of the FTIR spectra. The silica thin films were synthesized by the sol-gel method using a combination of DMDMS and TEOS as silica precursors, ethanol as solvent and ammonia as catalyst, with DMDMS:TEOS molar ratios of 10:90, 25:75, 50:50, 75:25 and 90:10. The results showed that the DMDMS:TEOS molar ratio had a significant impact on the hydrophobic properties of silica thin films coated on a glass surface. Furthermore, the correlation between water contact angle (WCA) and DMDMS:TEOS molar ratio was found to be parabolic, with the apex of the parabola at a DMDMS:TEOS molar ratio of 50:50 for all calcination temperatures. The silica xerogel exhibited notable changes in relative peak intensities, showing FTIR peak splitting of the asymmetric Si-O-Si stretching band (νas). To uncover what happens at this FTIR peak, deconvolution was conducted with a Gaussian approach. It was established that the changes in the Gaussian peak components were related to the DMDMS:TEOS molar ratios and the calcination temperature, which allowed us to tailor a DMDMS:TEOS silica polymer structure model based on the peak intensity ratios. With increasing DMDMS:TEOS molar ratio, the (cyclic Si-O-Si)/(linear Si-O-Si) ratio decreased, whilst the (C-H)/(linear Si-O-Si) ratio increased. Both ratios intersected at a DMDMS:TEOS molar ratio of 50:50, with contribution factor ratios of 1:16 and 1:50 for silica xerogel calcined at 300 °C and 500 °C, respectively. The importance of this research is that the DMDMS:TEOS molar ratio plays a decisive role in determining the hydrophobic properties of the thin films.

  6. Analysis of low-pass filters for approximate deconvolution closure modelling in one-dimensional decaying Burgers turbulence

    Science.gov (United States)

    San, O.

    2016-01-01

    The idea of spatial filtering is central to approximate deconvolution large-eddy simulation (AD-LES) of turbulent flows. The need for low-pass filters naturally arises in the approximate deconvolution approach, which is based solely on mathematical approximations employing repeated filtering operators. Two families of low-pass spatial filters are studied in this paper: the Butterworth filters and the Padé filters. With a selection of various filtering parameters, variants of the AD-LES are systematically applied to the decaying Burgers turbulence problem, which is a standard prototype for more complex turbulent flows. Compared with direct numerical simulations, all forms of the AD-LES approach are shown to predict significantly better results than the under-resolved simulations at the same grid resolution. However, the results depend strongly on the selection of the filtering procedure and the filter design. It is concluded that complete attenuation of the smallest scales is crucial to prevent energy accumulation at the grid cut-off.
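
    The approximate deconvolution operator itself is a short van Cittert series, u* = sum_{k=0}^{N} (I - G)^k applied to the filtered field; the sketch below pairs it with a simple three-point low-pass filter standing in for the Butterworth/Padé families studied in the paper.

    ```python
    import numpy as np

    def lowpass(u, alpha=0.25):
        """Three-point weighted average: a simple periodic low-pass filter G."""
        return (1 - 2 * alpha) * u + alpha * (np.roll(u, 1) + np.roll(u, -1))

    def approx_deconvolve(u_bar, N=5, alpha=0.25):
        """Van Cittert approximate deconvolution:
        u* = sum_{k=0}^{N} (I - G)^k applied to u_bar."""
        v = u_bar.copy()        # current term (I - G)^k u_bar, starting at k = 0
        u_star = u_bar.copy()
        for _ in range(N):
            v = v - lowpass(v, alpha)   # apply (I - G) once more
            u_star += v
        return u_star
    ```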

  7. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    Science.gov (United States)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in the last years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of the image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.

  8. A deconvolution method for deriving the transit time spectrum for ultrasound propagation through cancellous bone replica models.

    Science.gov (United States)

    Langton, Christian M; Wille, Marie-Luise; Flegg, Mark B

    2014-04-01

    The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportions of bone and marrow through which it propagates. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectrum analyses derived from Bland-Altman analysis ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available utilising ultrasound.
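
    Since the active-set method referenced is the classic Lawson-Hanson non-negative least squares algorithm, the deconvolution can be sketched as solving y = X h for a non-negative transit-time spectrum h, with X a convolution matrix built from the input signal (a simplified 1-D version of the procedure):

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    def transit_time_spectrum(x_in, y_out):
        """Recover a non-negative transit-time spectrum h from input/output
        signals related by y = conv(x, h), via active-set NNLS."""
        n = len(y_out)
        col = np.pad(np.asarray(x_in, float), (0, max(0, n - len(x_in))))[:n]
        r = np.zeros(n)
        r[0] = col[0]
        X = toeplitz(col, r)        # lower-triangular convolution matrix
        h, _ = nnls(X, y_out)
        return h
    ```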

  9. Statistical sum of bosonic string, compactified on an orbifold

    International Nuclear Information System (INIS)

    Morozov, A.; Ol'shanetskij, M.

    1986-01-01

    An expression for the statistical sum of a bosonic string compactified on a singular orbifold is presented. All the information about the orbifold is encoded in a specific combination of theta-functions, through which the statistical sum is expressed.

  10. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    Science.gov (United States)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers occupy an intermediate level of complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
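
    The PCA and HCA stages of such a pipeline can be sketched in a few lines; the component and cluster counts below are arbitrary placeholders, and the PLS-DA stage is omitted:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    def classify_spectra(spectra, n_components=3, n_clusters=6):
        """Dimension-reduce composition spectra (experiments x channels)
        with PCA, then group the experiments by hierarchical (Ward)
        clustering of the PCA scores."""
        scores = PCA(n_components=n_components).fit_transform(spectra)
        Z = linkage(scores, method="ward")
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        return scores, labels
    ```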

  11. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    Science.gov (United States)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there was a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by a crosscorrelation of responses. Recently, an alternative implementation was proposed as SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both source-sampling and source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields. We address specific problems for teleseismic wavefields, such as long and complicated source

  12. Childhood Cancer Statistics

    Science.gov (United States)

    Childhood Cancer Statistics: graphs and infographics covering the number of diagnoses and incidence rates.

  13. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    Science.gov (United States)

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
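
    Conceptually, the deconvolution is a linear unmixing: each of the four dual-wavelength difference signals is modeled as a weighted sum of the three redox components, and the component time courses follow from least squares. The 4x3 coefficient matrix below is purely illustrative; in the instrument it is derived from the empirically determined Differential Model Plots and the measured maximal changes.

    ```python
    import numpy as np

    # Hypothetical mixing coefficients (rows: the four difference signals,
    # columns: Fd, P700, PC); real values come from the Differential Model Plots.
    M = np.array([[0.8, 0.3, 0.1],    # 785-840 nm
                  [0.2, 0.9, 0.2],    # 810-870 nm
                  [0.1, 0.4, 0.7],    # 870-970 nm
                  [0.5, 0.6, 0.6]])   # 795-970 nm

    def unmix(signals):
        """Least-squares deconvolution of [Fd, P700, PC] redox changes from
        the four measured difference signals (shape: 4 x n_timepoints)."""
        x, *_ = np.linalg.lstsq(M, signals, rcond=None)
        return x  # rows: Fd, P700, PC
    ```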

  14. Bayesian Blind Separation and Deconvolution of Dynamic Image Sequences Using Sparsity Priors

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2015-01-01

    Roč. 34, č. 1 (2015), s. 258-266 ISSN 0278-0062 R&D Projects: GA ČR GA13-29225S Keywords : Functional imaging * Blind source separation * Computer-aided detection and diagnosis * Probabilistic and statistical methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.756, year: 2015 http://library.utia.cas.cz/separaty/2014/AS/tichy-0431090.pdf

  15. MQSA National Statistics

    Science.gov (United States)


  16. State Transportation Statistics 2014

    Science.gov (United States)

    2014-12-15

    The Bureau of Transportation Statistics (BTS) presents State Transportation Statistics 2014, a statistical profile of transportation in the 50 states and the District of Columbia. This is the 12th annual edition of State Transportation Statistics, a ...

  17. A simple method for the deconvolution of 134 Cs/137 Cs peaks in gamma-ray scintillation spectrometry

    International Nuclear Information System (INIS)

    Darko, E.O.; Osae, E.K.; Schandorf, C.

    1998-01-01

    A simple method for the deconvolution of 134 Cs/137 Cs peaks in a given mixture of 134 Cs and 137 Cs using NaI(Tl) gamma-ray scintillation spectrometry is described. In this method the 795 keV energy of 134 Cs is used as a reference peak to calculate the activity of the 137 Cs directly from the measured peaks. Certified reference materials were measured using the method and compared with high-resolution gamma-ray spectrometry measurements. The results showed good agreement with the certified values. The method is very simple and does not need any complicated mathematics or computer programme to deconvolute the overlapping 604.7 keV and 661.6 keV peaks of 134 Cs and 137 Cs respectively. (author). 14 refs.; 1 tab., 2 figs
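
    The arithmetic behind the reference-peak idea fits in a few lines; in this sketch k_605 is a hypothetical calibration constant (the ratio of emission probabilities times the relative detection efficiency of the particular NaI(Tl) detector), not a value taken from the paper:

```python
def cs137_net_counts(counts_overlap, counts_795, k_605):
    """Estimate the net 137Cs (661.6 keV) counts: predict the 134Cs
    contribution to the overlapping 604.7/661.6 keV region from the
    interference-free 795 keV 134Cs peak, then subtract it."""
    return counts_overlap - k_605 * counts_795
```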

  18. A joint Richardson—Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson–Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user-friendly software package. (paper)
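
    For orientation, a plain (non-joint) Richardson–Lucy iteration under a widefield image-formation model is sketched below; the paper's jRL-MSIM algorithm extends this update jointly over all multifocal illumination patterns:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy deconvolution for a shift-invariant PSF."""
    image = np.asarray(image, dtype=float)
    psf = psf / psf.sum()              # normalized point spread function
    psf_mirror = psf[::-1, ::-1]       # adjoint of the blur operator
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)          # data / model prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```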

  19. Anisotropic strain in YBa2Cu3O7-δ films analysed by deconvolution of two-dimensional intensity data

    International Nuclear Information System (INIS)

    Broetz, J.; Fuess, H.

    2001-01-01

    The influence of the instrumental resolution on two-dimensional reflection profiles of epitaxial YBa 2 Cu 3 O 7-δ films on SrTiO 3 (001) has been studied in order to investigate the strain in the superconducting films. The X-ray diffraction intensity data were obtained by two-dimensional scans in reciprocal space (q-scan). Since the reflection broadening caused by the apparatus differs for each position in reciprocal space, a highly crystalline substrate was used as a standard. Thus it was possible to measure a standard very close to the YBa 2 Cu 3 O 7-δ reflections in reciprocal space. The two-dimensional deconvolution of reflections by a new computer program revealed an anisotropic strain of the two twinning systems of the film. (orig.)

  20. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    Science.gov (United States)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with a practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  1. The mathematics of a successful deconvolution: a quantitative assessment of mixture-based combinatorial libraries screened against two formylpeptide receptors.

    Science.gov (United States)

    Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia

    2013-05-30

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  2. Response function during oxygen sputter profiling and its application to deconvolution of ultrashallow B depth profiles in Si

    International Nuclear Information System (INIS)

    Shao Lin; Liu Jiarui; Wang Chong; Ma, Ki B.; Zhang Jianming; Chen, John; Tang, Daniel; Patel, Sanjay; Chu Weikan

    2003-01-01

    The secondary ion mass spectrometry (SIMS) response function to a B 'δ surface layer' has been investigated. Using electron-gun evaporation combined with liquid nitrogen cooling of the target, we are able to deposit an ultrathin B layer without detectable island formation. The B spatial distribution obtained from SIMS is exponentially decaying, with a decay length that is approximately a linear function of the incident energy of the oxygen during the SIMS analysis. Deconvolution with the response function has been applied to reconstruct the spatial distribution of ultra-low-energy B implants. A correction to the depth and yield scales due to transient sputtering near the Si surface region was also applied. Transient erosion shifts the profile shallower, but beam mixing shifts it deeper. These mutually compensating effects make the adjusted distribution almost the same as the original data. The one significant difference is a buried B peak observed near the surface region.
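
    A minimal sketch of deconvolving a measured profile with such an exponential response, using a water-level regularized spectral division (the paper does not spell out its numerical scheme, so this is only one plausible realization):

```python
import numpy as np

def deconvolve_sims(profile, dz, decay_length, water_level=1e-2):
    """Recover a B depth distribution from a SIMS profile, modelling the
    response to a delta layer as a normalized decaying exponential."""
    z = np.arange(len(profile)) * dz
    response = np.exp(-z / decay_length)
    response /= response.sum()
    P = np.fft.rfft(np.asarray(profile, dtype=float))
    R = np.fft.rfft(response)
    R = np.where(np.abs(R) < water_level, water_level, R)  # avoid blow-up
    return np.fft.irfft(P / R, n=len(profile))
```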

  3. Tracking juniper berry content in oils and distillates by spectral deconvolution of gas chromatography/mass spectrometry data.

    Science.gov (United States)

    Robbat, Albert; Kowalsick, Amanda; Howell, Jessalin

    2011-08-12

    The complex nature of botanicals and essential oils makes it difficult to identify all of the constituents by gas chromatography/mass spectrometry (GC/MS) alone. In this paper, automated sequential, multidimensional gas chromatography/mass spectrometry (GC-GC/MS) was used to obtain a matrix-specific, retention time/mass spectrometry library of 190 juniper berry oil compounds. GC/MS analysis on stationary phases with different polarities confirmed the identities of each compound when spectral deconvolution software was used to analyze the oil. Also analyzed were distillates of juniper berry and its oil as well as gin from four different manufacturers. Findings showed that the chemical content of juniper berry can be traced from starting material to final product and can be used to authenticate and differentiate brands.

  4. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    International Nuclear Information System (INIS)

    Geloni, G.; Saldin, E.L.; Schneidmiller, E.A.; Yurkov, M.V.

    2004-01-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with a practically important example of a bunch formed in a single bunch-compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.

  5. The reactions of neutral iron clusters with D2O: Deconvolution of equilibrium constants from multiphoton processes

    International Nuclear Information System (INIS)

    Weiller, B.H.; Bechthold, P.S.; Parks, E.K.; Pobo, L.G.; Riley, S.J.

    1989-01-01

    The chemical reactions of neutral iron clusters with D 2 O are studied in a continuous flow tube reactor by molecular beam sampling and time-of-flight mass spectrometry with laser photoionization. Product distributions are invariant to a four-fold change in reaction time, demonstrating that equilibrium is attained between free and adsorbed D 2 O. The observed negative temperature dependence is consistent with an exothermic, molecular addition reaction at equilibrium. Under our experimental conditions, there is significant photodesorption of D 2 O (Fe_n(D 2 O)_m + hν → Fe_n + m D 2 O) along with ionization due to absorption of multiple photons from the ionizing laser. Using a simple model based on a rate equation analysis, we are able to quantitatively deconvolute this desorption process from the equilibrium constants. 8 refs., 1 fig

  6. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    Science.gov (United States)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of a broadly varied MW structurally similar to native lignin. For the first time, we report an effective formation of multiply charged species of lignin with subsequent mass spectrum deconvolution in the presence of 100 mmol L⁻¹ formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained M_n and M_w values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
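
    The deconvolution of multiply charged ESI species rests on simple arithmetic: two adjacent charge states of the same neutral determine both the charge and the mass. A toy two-peak version (illustrative only; production deconvolution algorithms fit whole charge-state envelopes):

```python
PROTON = 1.00728  # proton mass in Da

def neutral_mass(mz_low, mz_high):
    """Two adjacent positive-mode charge states (z+1 at mz_low, z at
    mz_high) of one species: solve z*(mz_high - PROTON) ==
    (z+1)*(mz_low - PROTON) for z, then return (mass, z)."""
    z = round((mz_low - PROTON) / (mz_high - mz_low))
    return z * (mz_high - PROTON), z

# e.g. neutral_mass(1001.0, 1251.0) -> (approximately 5000.0, 4)
```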

  7. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    International Nuclear Information System (INIS)

    Greenberg, M.; Ebel, D.S.

    2009-01-01

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ∼15 μm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis of track No.82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analyses.

  8. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    Energy Technology Data Exchange (ETDEWEB)

    Bade, Richard [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Causanilles, Ana; Emke, Erik [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix [Research Institute for Pesticides and Water, University Jaume I, Avda. Sos Baynat s/n, E-12071 Castellón (Spain); Voogt, Pim de, E-mail: w.p.devoogt@uva.nl [KWR Watercycle Research Institute, Chemical Water Quality and Health, P.O. Box 1072, 3430 BB Nieuwegein (Netherlands); Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam (Netherlands)

    2016-11-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds, including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers, and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target and non-target screening method is utilised using two databases • Two software tools (MsXelerator and Sieve 2.1) used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually.

  9. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    International Nuclear Information System (INIS)

    Bade, Richard; Causanilles, Ana; Emke, Erik; Bijlsma, Lubertus; Sancho, Juan V.; Hernandez, Felix; Voogt, Pim de

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of > 200 pharmaceuticals and illicit drugs or ChemSpider. This hidden target screening approach led to the detection of numerous compounds, including the illicit drug cocaine and its metabolite benzoylecgonine and the pharmaceuticals carbamazepine, gemfibrozil and losartan. The compounds found using both approaches were combined, and isotopic pattern and retention time prediction were used to filter out false positives. The remaining potential positives were reanalysed in MS/MS mode and their product ions were compared with literature and/or mass spectral libraries. The inclusion of the chemical database ChemSpider led to the tentative identification of several metabolites, including paraxanthine, theobromine, theophylline and carboxylosartan, as well as the pharmaceutical phenazone. The first three of these compounds are isomers, and they were subsequently distinguished based on their product ions and predicted retention times. This work has shown that the use of deconvolution tools facilitates non-target screening and enables the identification of a higher number of compounds. - Highlights: • A hidden target and non-target screening method is utilised using two databases • Two software tools (MsXelerator and Sieve 2.1) used for both methods • 22 compounds tentatively identified following MS/MS reinjection • More information gleaned from this combined approach than individually

  10. Deconvolution analysis of 24-h serum cortisol profiles informs the amount and distribution of hydrocortisone replacement therapy.

    Science.gov (United States)

    Peters, Catherine J; Hill, Nathan; Dattani, Mehul T; Charmandari, Evangelia; Matthews, David R; Hindmarsh, Peter C

    2013-03-01

    Hydrocortisone therapy is based on a dosing regimen derived from estimates of cortisol secretion, but little is known of how the dose should be distributed throughout the 24 h. We have used deconvolution analysis of 24-h serum cortisol profiles to determine 24-h cortisol secretion and distribution to inform hydrocortisone dosing schedules in young children and older adults. Twenty-four-hour serum cortisol profiles from 80 adults (41 men, aged 60-74 years) and 29 children (24 boys, aged 5-9 years) were subjected to deconvolution analysis using an 80-min half-life to ascertain total cortisol secretion and distribution throughout the 24-h period. Mean daily cortisol secretion was similar between adults (6.3 mg/m(2) body surface area/day, range 5.1-9.3) and children (8.0 mg/m(2) body surface area/day, range 5.3-12.0). Peak serum cortisol concentration was higher in children compared with adults, whereas nadir serum cortisol concentrations were similar. Timing of the peak serum cortisol concentration was similar (07.05-07.25), whereas that of the nadir concentration occurred later in adults (midnight) compared with children (22.48) (P = 0.003). Children had the highest percentage of cortisol secretion between 06.00 and 12.00 (38.4%), whereas in adults this took place between midnight and 06.00 (45.2%). These observations suggest that the daily hydrocortisone replacement dose should be equivalent on average to 6.3 mg/m(2) body surface area/day in adults and 8.0 mg/m(2) body surface area/day in children. Differences in distribution of the total daily dose between older adults and young children need to be taken into account when using a three or four times per day dosing regimen.
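
    Deconvolution of this kind can be cast as a non-negative linear inverse problem. A sketch under the stated assumption of mono-exponential elimination with an 80-min half-life (names and discretization are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import nnls

def secretion_rates(cortisol, dt_min, half_life_min=80.0):
    """Recover non-negative secretion rates from a serum cortisol
    profile sampled every dt_min minutes."""
    c = np.asarray(cortisol, dtype=float)
    n = len(c)
    k = np.log(2.0) / half_life_min          # elimination rate constant
    kernel = np.exp(-k * np.arange(n) * dt_min) * dt_min
    # Lower-triangular convolution matrix: concentration = kernel * secretion
    A = np.array([[kernel[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    rates, _ = nnls(A, c)
    return rates
```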

  11. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and, in particular, for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamic potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.

  12. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

    Indian Academy of Sciences (India)

    Sampling, Probability Models and Statistical Reasoning: Statistical Inference. Mohan Delampady and V R Padmawar. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp 49-58.

  13. Semiclassical statistical mechanics

    International Nuclear Information System (INIS)

    Stratt, R.M.

    1979-04-01

    On the basis of an approach devised by Miller, a formalism is developed which allows the nonperturbative incorporation of quantum effects into equilibrium classical statistical mechanics. The resulting expressions bear a close similarity to classical phase space integrals and, therefore, are easily molded into forms suitable for examining a wide variety of problems. As a demonstration of this, three such problems are briefly considered: the simple harmonic oscillator, the vibrational state distribution of HCl, and the density-independent radial distribution function of He 4 . A more detailed study is then made of two more general applications involving the statistical mechanics of nonanalytic potentials and of fluids. The former, which is a particularly difficult problem for perturbative schemes, is treated with only limited success by restricting phase space and by adding an effective potential. The problem of fluids, however, is readily found to yield to a semiclassical pairwise interaction approximation, which in turn permits any classical many-body model to be expressed in a convenient form. The remainder of the discussion concentrates on some ramifications of having a phase space version of quantum mechanics. To test the breadth of the formulation, the task of constructing quantal ensemble averages of phase space functions is undertaken, and in the process several limitations of the formalism are revealed. A rather different approach is also pursued. The concept of quantum mechanical ergodicity is examined through the use of numerically evaluated eigenstates of the Barbanis potential, and the existence of this quantal ergodicity - normally associated with classical phase space - is verified. 21 figures, 4 tables

  14. Isotopic safeguards statistics

    International Nuclear Information System (INIS)

    Timmerman, C.L.; Stewart, K.B.

    1978-06-01

    The methods and results of our statistical analysis of isotopic data using isotopic safeguards techniques are illustrated using example data from the Yankee Rowe reactor. The statistical methods used in this analysis are the paired comparison and the regression analyses. A paired comparison results when a sample from a batch is analyzed by two different laboratories. Paired comparison techniques can be used with regression analysis to detect and identify outlier batches. The second analysis tool, linear regression, involves comparing various regression approaches. These approaches use two basic types of models: the intercept model (y = α + βx) and the initial point model [y - y0 = β(x - x0)]. The intercept model fits strictly the exposure or burnup values of isotopic functions, while the initial point model utilizes the exposure values plus the initial or fabricator's data values in the regression analysis. Two fitting methods are applied to each of these models. These methods are: (1) the usual least squares fitting approach, where x is measured without error, and (2) Deming's approach, which uses the variance estimates obtained from the paired comparison results and considers that x and y are both measured with error. The Yankee Rowe data were first measured by Nuclear Fuel Services (NFS) and remeasured by Nuclear Audit and Testing Company (NATCO). The ratio of Pu/U versus 235 D (in which 235 D is the amount of depleted 235 U expressed in weight percent) using actual numbers is the isotopic function illustrated. Statistical results using the Yankee Rowe data indicate the attractiveness of Deming's regression model over the usual approach by simple comparison of the given regression variances with the random variance from the paired comparison results.

  15. Dynamic statistical information theory

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In recent years we extended Shannon's static statistical information theory to dynamic processes and established a Shannon dynamic statistical information theory, whose core is the evolution law of dynamic entropy and dynamic information. We also proposed a corresponding Boltzmann dynamic statistical information theory. Based on the fact that the state-variable evolution equations of the respective dynamic systems, i.e. the Fokker-Planck equation and the Liouville diffusion equation, can be regarded as their information-symbol evolution equations, we derived the nonlinear evolution equations of Shannon dynamic entropy density and dynamic information density and the nonlinear evolution equations of Boltzmann dynamic entropy density and dynamic information density, which describe, respectively, the evolution laws of dynamic entropy and dynamic information. The evolution equations of these two kinds of dynamic entropy and dynamic information show in unison that the time rate of change of the dynamic entropy densities is caused by their drift, diffusion and production in state-variable space inside the systems and in coordinate space in the transmission processes, and that the time rate of change of the dynamic information densities originates from their drift, diffusion and dissipation in state-variable space inside the systems and in coordinate space in the transmission processes. Entropy and information have thus been combined with the state of the systems and its law of motion. Furthermore, we presented the formulas for the two kinds of entropy production rates and information dissipation rates, and the expressions for the two kinds of drift information flows and diffusion information flows. We proved that the two kinds of information dissipation rates (or the decrease rates of the total information) are equal to their corresponding entropy production rates (or the increase rates of the total entropy) in the same dynamic system. We obtained the formulas for two kinds of dynamic mutual information and dynamic channel

  16. The foundations of statistics

    CERN Document Server

    Savage, Leonard J

    1972-01-01

    Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

  17. State Transportation Statistics 2010

    Science.gov (United States)

    2011-09-14

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2010, a statistical profile of transportation in the 50 states and the District of Col...

  18. State Transportation Statistics 2012

    Science.gov (United States)

    2013-08-15

    The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2012, a statistical profile of transportation ...

  19. Adrenal Gland Tumors: Statistics

    Science.gov (United States)

    Adrenal Gland Tumor: Statistics. Approved by the Cancer.Net Editorial Board, 03/... A primary adrenal gland tumor is very uncommon. Exact statistics are not available for this type of tumor ...

  20. State transportation statistics 2009

    Science.gov (United States)

    2009-01-01

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2009, a statistical profile of transportation in the 50 states and the District ...

  1. State Transportation Statistics 2011

    Science.gov (United States)

    2012-08-08

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2011, a statistical profile of transportation in the 50 states and the District of Col...

  2. Neuroendocrine Tumor: Statistics

    Science.gov (United States)

    Neuroendocrine Tumor: Statistics. Approved by the Cancer.Net Editorial Board, 01/... the body. It is important to remember that statistics on the survival rates for people with a ...

  3. State Transportation Statistics 2013

    Science.gov (United States)

    2014-09-19

    The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2013, a statistical profile of transportatio...

  4. BTS statistical standards manual

    Science.gov (United States)

    2005-10-01

    The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

  5. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    Science.gov (United States)

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images taken not close to the Scherzer focus condition and not representing the projected structures intuitively were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The defect structure restoration together with the recognition of Si and C atoms from the experimental images has been illustrated. The structure maps of an intrinsic stacking fault in the area of SiC, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at atomic level.

  6. Deconvolution of Thermal Emissivity Spectra of Mercury to their Endmember Counterparts measured in Simulated Mercury Surface Conditions

    Science.gov (United States)

    Varatharajan, I.; D'Amore, M.; Maturilli, A.; Helbert, J.; Hiesinger, H.

    2017-12-01

    The Mercury Radiometer and Thermal Imaging Spectrometer (MERTIS) payload of the ESA/JAXA BepiColombo mission to Mercury will map the thermal emissivity at a wavelength range of 7-14 μm and a spatial resolution of 500 m/pixel [1]. Mercury was also imaged in the same wavelength range using Boston University's Mid-Infrared Spectrometer and Imager (MIRSI) mounted on the NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, with a minimum spatial coverage of 400-600 km per spectrum, which blends all rock, mineral, and soil types [2]. Therefore, the study [2] used the quantitative deconvolution algorithm developed by [3] for spectral unmixing of this composite thermal emissivity spectrum from the telescope into the respective areal fractions of endmember spectra; however, the thermal emissivity of the endmembers used in [2] is the inverted reflectance measurements (Kirchhoff's law) of various samples measured at room temperature and pressure. For over a decade, the Planetary Spectroscopy Laboratory (PSL) at the Institute of Planetary Research (PF) of the German Aerospace Center (DLR) has provided thermal emissivity measurements under controlled and simulated surface conditions of Mercury, taking emissivity measurements at temperatures varying from 100-500°C under vacuum in support of the MERTIS payload. The measured thermal emissivity endmember spectral library therefore includes major silicates such as bytownite, anorthoclase, synthetic glass, olivine, enstatite, and nepheline basanite, rocks like komatiite and tektite, Johnson Space Center lunar simulant (1A), and synthetic powdered sulfides, which include MgS, FeS, CaS, CrS, TiS, NaS, and MnS. Using such a specialized endmember spectral library created under Mercury's conditions significantly increases the accuracy of the deconvolution model results. In this study, we revisited the available telescope spectra and redeveloped the algorithm of [3], choosing only the endmember spectral library created at PSL for unbiased model
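
    At its core, the quantitative deconvolution referred to here is linear spectral unmixing. A hedged sketch using non-negative least squares with a soft sum-to-one constraint (one common formulation, not necessarily that of [3]):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_emissivity(spectrum, endmembers, w=10.0):
    """Unmix a composite emissivity spectrum (n_bands,) into areal
    fractions of endmember spectra (n_bands, n_endmembers); the extra
    weighted row softly enforces that the fractions sum to one."""
    E = np.asarray(endmembers, dtype=float)
    A = np.vstack([E, w * np.ones((1, E.shape[1]))])
    b = np.append(np.asarray(spectrum, dtype=float), w)
    fractions, _ = nnls(A, b)
    return fractions / fractions.sum()
```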

  7. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  8. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion of the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers, which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating the statistical distribution from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results of q-deformation distributions in the literature are inaccurate or incomplete
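
    For reference, the Gentile distribution mentioned here has the standard closed form (for a level of energy ε, chemical potential μ, inverse temperature β, and maximum occupation number n):

```latex
\langle N_{\varepsilon} \rangle
  = \frac{1}{e^{\beta(\varepsilon-\mu)} - 1}
  - \frac{n+1}{e^{(n+1)\beta(\varepsilon-\mu)} - 1}
```

    Setting n = 1 recovers the Fermi-Dirac distribution and n → ∞ the Bose-Einstein distribution.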

  9. Statistics in Schools

    Science.gov (United States)

    Statistics in Schools: Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities.

  10. Transport Statistics - Transport - UNECE

    Science.gov (United States)

    UNECE Transport Statistics: one of the UNECE Transport areas of work, covering terms of reference, meetings and events, and the Working Party on Transport Statistics (WP.6).

  11. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma-ray spectrometry is a passive non-destructive assay most commonly used to identify and quantify the radionuclides present in complex, bulky objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectra (energies and net areas of the photoelectric absorption peaks), and the second step is to determine the detection efficiency of the measuring scene. Commercial software packages use different methods to extract the raw data from a spectrum, but none are optimal in the treatment of spectra containing actinides. Spectra must be handled individually, and the processing requires settings and significant feedback from the operator, which prevents automatic processing of the spectra and increases the risk of human error. In this context, the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from a spectrum. This methodology was applied with raw data and commercial software that needs configuration by the operator (GENIE2000, Interwinner...). This robust and fully automated uncertainty-calculation methodology covers the entire processing chain of the software. It ensures, for all peaks processed by the deconvolution software, an extraction of peak energies to within 2 channels and an extraction of net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)

  12. Time, Temperature, and Cationic Dependence of Alkali Activation of Slag: Insights from Fourier Transform Infrared Spectroscopy and Spectral Deconvolution.

    Science.gov (United States)

    Dakhane, Akash; Madavarapu, Sateesh Babu; Marzke, Robert; Neithalath, Narayanan

    2017-08-01

    The use of waste/by-product materials, such as slag or fly ash, activated using alkaline agents to create binding materials for construction applications (in lieu of portland cement) is on the rise. The influence of activation parameters (SiO 2 to Na 2 O ratio or M s of the activator, Na 2 O to slag ratio or n, cation type K + or Na + ) on the process and extent of alkali activation of slag under ambient and elevated temperature curing, evaluated through spectroscopic techniques, is reported in this paper. Fourier transform infrared spectroscopy along with a Fourier self-deconvolution method is used. The major spectral band of interest lies in the wavenumber range of ∼950 cm⁻¹, corresponding to the antisymmetric stretching vibration of Si-O-T (T = Si or Al) bonds. The variation in the spectra with time from 6 h to 28 days is attributed to the incorporation of Al in the gel structure and the enhancement in the degree of polymerization of the gel. 29 Si nuclear magnetic resonance spectroscopy is used to quantify the Al incorporation with time, which is found to be higher when Na silicate is used as the activator. The Si-O-T bond wavenumbers are also generally lower for the Na silicate activated systems.

  13. A Deconvolution Protocol for ChIP-Seq Reveals Analogous Enhancer Structures on the Mouse and Human Ribosomal RNA Genes

    Directory of Open Access Journals (Sweden)

    Jean-Clement Mars

    2018-01-01

    The combination of Chromatin Immunoprecipitation and Massively Parallel Sequencing, or ChIP-Seq, has greatly advanced our genome-wide understanding of chromatin and enhancer structures. However, its resolution at any given genetic locus is limited by several factors. In applying ChIP-Seq to the study of the ribosomal RNA genes, we found that a major limitation to resolution was imposed by the underlying variability in sequence coverage, which very often dominates the protein–DNA interaction profiles. Here, we describe a simple numerical deconvolution approach that, in large part, corrects for this variability and significantly improves both the resolution and quantitation of protein–DNA interaction maps deduced from ChIP-Seq data. This approach has allowed us to determine the in vivo organization of the RNA polymerase I preinitiation complexes that form at the promoters and enhancers of the mouse (Mus musculus) and human (Homo sapiens) ribosomal RNA genes, and to reveal a phased binding of the HMG-box factor UBF across the rDNA. The data identify and map a "Spacer Promoter" and associated stalled polymerase in the intergenic spacer of the human ribosomal RNA genes, and reveal a very similar enhancer structure to that found in rodents and lower vertebrates.

  14. High-spatial-resolution localization algorithm based on cascade deconvolution in a distributed Sagnac interferometer invasion monitoring system.

    Science.gov (United States)

    Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi

    2016-10-10

    In the Sagnac fiber optic interferometer system, the phase difference signal can be expressed as a convolution of the waveform of the invasion with its occurring-position-associated transfer function h(t); deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4×10⁶ s⁻¹, the algorithm should mainly go through three steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of invasion distance can be ascertained. Second, a narrower region of invasion distance is acquired by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) attains its minimum standard deviation. Third, fine sweeping the narrow region point by point with the same criteria gives the final localization. Also, the original waveform of the invasion can be restored for the first time as a by-product, which provides more accurate and pure characteristics for further processing, such as subsequent pattern recognition.
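
    The sweep-and-restore criterion of the second and third steps can be sketched as follows (hypothetical names; the coarse and fine sweeps are collapsed into one loop over candidate positions, each with its assumed transfer function):

```python
import numpy as np

def localize(phase_diff, candidate_h, eps=1e-6):
    """Deconvolve each candidate h(t) from the measured phase-difference
    signal and keep the position whose restored waveform x(t) has the
    smallest standard deviation."""
    n = len(phase_diff)
    D = np.fft.rfft(phase_diff)
    best_pos, best_std, best_x = None, np.inf, None
    for pos, h in candidate_h.items():   # {position: transfer function}
        H = np.fft.rfft(h, n=n)
        x = np.fft.irfft(D * np.conj(H) / (np.abs(H) ** 2 + eps), n=n)
        if x.std() < best_std:
            best_pos, best_std, best_x = pos, x.std(), x
    return best_pos, best_x   # estimated position and restored waveform
```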

  15. Concurrent and lagged impacts of an anomalously warm year on autotrophic and heterotrophic components of soil respiration: a deconvolution analysis.

    Science.gov (United States)

    Zhou, Xuhui; Luo, Yiqi; Gao, Chao; Verburg, Paul S J; Arnone, John A; Darrouzet-Nardi, Anthony; Schimel, David S

    2010-07-01

    Partitioning soil respiration into autotrophic (R(A)) and heterotrophic (R(H)) components is critical for understanding their differential responses to climate warming. Here, we used a deconvolution analysis to partition soil respiration in a pulse warming experiment. We first conducted a sensitivity analysis to determine which parameters can be identified from soil respiration data. A Markov chain Monte Carlo technique was then used to optimize those identifiable parameters in a terrestrial ecosystem model. Finally, the optimized parameters were employed to quantify R(A) and R(H) in a forward analysis. Our results showed that more than one-half of the parameters were constrained by daily soil respiration data. The optimized model simulation showed that warming stimulated R(H) and had little effect on R(A) in the first 2 months, but decreased both R(H) and R(A) during the remainder of the treatment and post-treatment years. Clipping of above-ground biomass amplified the warming effect on R(H) but not on R(A). Overall, warming decreased R(A) and R(H) significantly, by 28.9% and 24.9%, respectively, during the treatment year and by 27.3% and 33.3%, respectively, during the post-treatment year, largely as a result of decreased canopy greenness and biomass. Lagged effects of climate anomalies on soil respiration and its components are important in assessing terrestrial carbon cycle feedbacks to climate warming.

  16. Analysis of the stability of the traps in LiF: Mg, Cu, P by deconvolution of its TL curve

    International Nuclear Information System (INIS)

    Gonzalez, P.R.; Azorin, J.; Furetta, C.; Lopez, J.

    2004-01-01

    The results of a study of the stability of the traps in TL dosemeters of LiF: Mg,Cu,P + PTFE, developed at ININ, are presented, taking as reference the commercial dosemeter GR200A of Chinese manufacture. The TL readings taken the same day as the irradiation presented four peaks whose energies, determined by deconvolution, were 1.30 ± 0.01 eV, 1.50 ± 0.01 eV, 1.70 ± 0.01 eV and 2.58 ± 0.02 eV for LiF: Mg,Cu,P + PTFE, while for GR200A the energies were 1.33 ± 0.11 eV, 1.58 ± 0.11 eV, 1.73 ± 0.11 eV and 2.60 ± 0.03 eV. The energies of peaks 3 and 4, which remained visible during the six months of the study, were 1.38 ± 0.01 eV and 2.65 ± 0.01 eV for LiF: Mg,Cu,P + PTFE; in the same order for GR200A, the energies were 1.51 ± 0.02 eV and 2.64 ± 0.03 eV. (Author)

  17. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends the MS-VSTS to spherical two-plus-one-dimensional (2D-1D) data, where the first two dimensions are longitude and latitude and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma-rays in a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).

  18. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    Science.gov (United States)

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Novel work is utilizing constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for with tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. The aim of this study was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region-of-interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in ipsilesional tracts (0.86-1.00) and for all metrics in callosal fibers (0.85-1.00), indicating a reliable approach to evaluate FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume.

  19. Generalized quantum statistics

    International Nuclear Information System (INIS)

    Chou, C.

    1992-01-01

    In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics

  20. Evaluation of observables in statistical multifragmentation theories

    International Nuclear Information System (INIS)

    Cole, A.J.

    1989-01-01

    The canonical formulation of equilibrium statistical multifragmentation is examined. It is shown that the explicit construction of observables (average values) by sampling the partition probabilities is unnecessary insofar as closed expressions in the form of recursion relations can be obtained quite easily. Such expressions may conversely be used to verify the sampling algorithms
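
    One widely used recursion of this type (for canonical partitions of A nucleons into fragments of mass number k with single-fragment partition functions z_k; the paper's own relations may differ in detail) is:

```latex
Z_A = \frac{1}{A}\sum_{k=1}^{A} k\, z_k\, Z_{A-k}, \qquad Z_0 = 1,
\qquad \langle n_k \rangle = z_k\, \frac{Z_{A-k}}{Z_A}
```

    where ⟨n_k⟩ is the average multiplicity of fragments of size k, obtained directly from the recursion without any sampling of the partition probabilities.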

  1. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    The Manual is written in a question-and-answer format. The points developed are introduced with a basic question, such as: What do people mean by 'fuels' and 'energy'? What units are used to express oil? How are energy data presented? Answers are given in simple terms and illustrated by graphs, charts and tables. More technical explanations are found in the annexes. The Manual contains seven chapters. The first one presents the fundamentals of energy statistics, five chapters deal with the five different fuels (electricity and heat; natural gas; oil; solid fuels and manufactured gases; renewables and waste) and the last chapter explains the energy balance. Three technical annexes and a glossary are also included. For the five chapters dedicated to the fuels, there are three levels of reading: the first one contains general information on the subject, the second one reviews issues which are specific to the joint IEA/OECD-Eurostat-UNECE questionnaires and the third one focuses on the essential elements of the subject. 43 figs., 22 tabs., 3 annexes.

  2. ddClone: joint statistical inference of clonal populations from single cell and bulk tumour sequencing data.

    Science.gov (United States)

    Salehi, Sohrab; Steif, Adi; Roth, Andrew; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2017-03-01

    Next-generation sequencing (NGS) of bulk tumour tissue can identify constituent cell populations in cancers and measure their abundance. This requires computational deconvolution of allelic counts from somatic mutations, which may be incapable of fully resolving the underlying population structure. Single cell sequencing (SCS) is a more direct method, although its replacement of NGS is impeded by technical noise and sampling limitations. We propose ddClone, which analytically integrates NGS and SCS data, leveraging their complementary attributes through joint statistical inference. We show on real and simulated datasets that ddClone produces more accurate results than can be achieved by either method alone.

  3. National Statistical Commission and Indian Official Statistics

    Indian Academy of Sciences (India)

    T J Rao, C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS), University of Hyderabad Campus, Central University Post Office, Prof. C. R. Rao Road, Hyderabad 500 046, AP, India.

  4. Ab initio Algorithmic Causal Deconvolution of Intertwined Programs and Networks by Generative Mechanism

    KAUST Repository

    Zenil, Hector

    2018-02-18

    To extract and learn representations leading to generative mechanisms from data, especially without making arbitrary decisions and biased assumptions, is a central challenge in most areas of scientific research, particularly in connection with current major limitations of influential methods in machine and deep learning, which have often lost sight of the model component. Complex data are usually produced by interacting sources with different mechanisms. Here we introduce a parameter-free, model-based approach, based upon the seminal concept of Algorithmic Probability, that decomposes an observation or signal into its most likely algorithmic generative mechanisms. Our methods use a causal calculus to infer model representations. We demonstrate the method's ability to distinguish interacting mechanisms and deconvolve them, regardless of whether the objects produce strings, space-time evolution diagrams, images or networks. We numerically test and evaluate our method and find that it can disentangle observations from discrete dynamical systems, random and complex networks. We think that these causal inference techniques can contribute key pieces of information for estimating probability distributions, complementing other, more statistically oriented techniques that otherwise lack model-inference capabilities.

  5. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics cou

  6. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores

  7. Homeopathic potencies of Arnica montana L. change gene expression in a Tamm-Horsfall protein-1 cell line in vitro model: the role of ethanol as a possible confounder and statistical bias.

    Science.gov (United States)

    Chirumbolo, Salvatore; Bjørklund, Geir

    2017-07-01

    Marzotto et al. showed that homeopathic preparations of Arnica montana L. acted directly on gene expression of Tamm-Horsfall protein-1 (THP-1) monocyte/macrophage cell lines activated with phorbol 12-myristate 13-acetate and interleukin-4 (IL-4). A. montana homeopathic dilutions are used in complementary and alternative medicine to treat inflammation disorders and post-traumatic events, as well as for wound repair. The French Pharmacopoeia for these remedies uses 0.3% ethanol in each centesimal dilution. In this paper, we discuss how ethanol-containing A. montana homeopathic centesimal dilutions can change gene expression in IL-4-treated monocyte/macrophage THP-1. We assessed the role of ethanol in the Arnica homeopathic dilutions containing this alcohol by investigating its action on gene expression of THP-1 cells. The evidence strongly suggests that the presence of ethanol in these remedies might play a fundamental role in the dilutions' ability to affect gene expression, particularly for doses from 5c to 15c: rather than playing a major role in the mesoscopic structure of water, the ethanol might have a chemical-physical role in the induction of THP-1 gene expression, apoptosis, and deoxyribonucleic acid function. This evidence raises a debate over the suggestion that the use of a binary mixed solvent in homeopathic chemistry, used by Hahnemann since 1810, may be fundamental to explaining the activity of homeopathy in cell models.

  8. Classical model of intermediate statistics

    International Nuclear Information System (INIS)

    Kaniadakis, G.

    1994-01-01

    In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_{κκ'}. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_{BF}. This family contains, as a particular case when η_{BF}=0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_{BF}. On the contrary, when T=0, η_{BF} becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions.
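
    A compact way to see the interpolation, written in the standard inclusion-exclusion form (the notation below is ours, not necessarily the paper's): transition rates into a state of occupation n carry a factor 1 + κn, and detailed balance then yields a one-parameter family of steady states,

    ```latex
    n(\epsilon) \;=\; \frac{1}{e^{(\epsilon-\mu)/k_{B}T} - \kappa},
    \qquad
    \kappa =
    \begin{cases}
    -1 & \text{Fermi--Dirac (exclusion)}\\
    \phantom{+}0 & \text{Maxwell--Boltzmann}\\
    +1 & \text{Bose--Einstein (inclusion)}
    \end{cases}
    ```

    with intermediate statistics corresponding to intermediate values of κ.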

  9. Recreational Boating Statistics 2012

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  10. Recreational Boating Statistics 2013

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  11. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  12. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  13. Recreational Boating Statistics 2011

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  14. Uterine Cancer Statistics

    Science.gov (United States)

    Uterine cancer is the most commonly diagnosed gynecologic cancer. The U.S. Cancer Statistics Data Visualizations tool makes ...

  15. Tuberculosis Data and Statistics

    Science.gov (United States)

    Data and statistics resources include the Morbidity and Mortality Weekly Report: Decrease in Reported Tuberculosis Cases, MMWR 2010; 59 ( ...

  16. National transportation statistics 2011

    Science.gov (United States)

    2011-04-01

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics : (BTS), National Transportation Statistics presents information on the U.S. transportation system, including : its physical components, safety reco...

  17. National Transportation Statistics 2008

    Science.gov (United States)

    2009-01-08

    Compiled and published by the U.S. Department of Transportations Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...

  18. Mental Illness Statistics

    Science.gov (United States)

    Research shows that mental illnesses are common in ... of mental illnesses, such as suicide and disability. Statistics topics: Mental Illness, Any Anxiety Disorder ...

  19. School Violence: Data & Statistics

    Science.gov (United States)

  20. Caregiver Statistics: Demographics

    Science.gov (United States)

    ... needs and services are wide-ranging and complex, statistics may vary from study to study. Sources for ...

  1. Aortic Aneurysm Statistics

    Science.gov (United States)

  2. Alcohol Facts and Statistics

    Science.gov (United States)

    Alcohol Use in the United States: ... 1238–1245, 2004. PMID: 15010446. National Center for Statistics and Analysis. 2014 Crash Data Key Findings (Traffic ...

  3. National Transportation Statistics 2009

    Science.gov (United States)

    2010-01-21

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...

  4. National transportation statistics 2010

    Science.gov (United States)

    2010-01-01

    National Transportation Statistics presents statistics on the U.S. transportation system, including its physical components, safety record, economic performance, the human and natural environment, and national security. This is a large online document...

  5. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  6. Principles of applied statistics

    National Research Council Canada - National Science Library

    Cox, D. R; Donnelly, Christl A

    2011-01-01

    ... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

  7. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics. * Explains how and why modern statistical methods provide more accurate results than conventional methods. * Covers the latest developments on multiple comparisons. * Includes recent advances ...

  8. Interactive statistics with ILLMO

    NARCIS (Netherlands)

    Martens, J.B.O.S.

    2014-01-01

    Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured ...

  9. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  10. Youth Sports Safety Statistics

    Science.gov (United States)

    ... 6):794-799. 31 American Heart Association. CPR statistics. www.heart.org/HEARTORG/CPRAndECC/WhatisCPR/CPRFactsandStats/CPR%20Statistics_ ... Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (January 10, 2013). The DAWN Report: ...

  11. Force and moment reconstruction for a nuclear transportation cask using sum of weighted accelerations and deconvolution theory

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Bateman, V.; Carne, T.G.; Gregory, D.L.; Attaway, S.W.; Bronowski, D.R.

    1989-01-01

    A 9-m drop test of a 1/3-scale-model spent fuel cask onto an unyielding target was conducted, and the structural response of the impact limiters and attachments was evaluated. A mass model of the cask body, with steel-sheathed redwood and balsa impact limiters, was tested in a 10-degree slapdown orientation: one end of the cask impacted the target before the other end, with higher deceleration forces resulting from the second impact. The information desired from this test is the deformation of the two impact limiters on either end of the cask as a function of the applied force. This paper summarizes only the applied-force calculations; additional details about the force and moment reconstruction methods, the analysis results, and the test hardware are provided elsewhere. Two new force reconstruction techniques were applied to the slapdown test data: the sum of weighted accelerations technique (SWAT) and deconvolution (DECON). These techniques use the cask structure as a generalized force transducer, eliminating the elastic vibration response of the cask from the acceleration data; the resulting rigid-body acceleration is then multiplied by the cask mass to obtain an estimate of the applied force. The frequency content of this force is restricted to the cut-off frequency of the digital filter, typically about one-half of the lowest elastic mode of the cask. The new force reconstruction techniques demonstrate the potential for a better estimate of the forces acting on the cask during impact than the conventional method. Their main advantages are the extension of the frequency bandwidth (due to the elimination of the elastic modal response in that bandwidth) and the preservation of the force rise time.
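
    A minimal numerical sketch of the SWAT idea (all mode shapes, masses, and signals below are invented for illustration): choose accelerometer weights with unit gain on the rigid-body mode and zero gain on the elastic modes, so that the weighted sum of the measured accelerations is the rigid-body acceleration, and the applied force follows from Newton's second law.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Mode shapes sampled at 4 accelerometer locations (values invented):
    # column 0 = rigid-body translation, columns 1-2 = elastic modes.
    Phi = np.array([[1.0,  1.0,  1.0],
                    [1.0,  0.3, -0.7],
                    [1.0, -0.3, -0.7],
                    [1.0, -1.0,  1.0]])

    # SWAT weights: unit gain on the rigid-body mode, zero on elastic modes.
    w, *_ = np.linalg.lstsq(Phi.T, np.array([1.0, 0.0, 0.0]), rcond=None)

    # Synthetic signals: rigid-body pulse plus elastic ringing plus noise.
    t = np.linspace(0.0, 0.05, 2000)
    a_rigid = 400.0 * np.exp(-((t - 0.01) / 0.002) ** 2)            # m/s^2
    ringing = 150.0 * np.sin(2 * np.pi * 900 * t) * np.exp(-60 * t)
    accels = (np.outer(Phi[:, 0], a_rigid) + np.outer(Phi[:, 1], ringing)
              + rng.normal(0.0, 5.0, (4, t.size)))

    mass = 2500.0                       # kg, assumed cask mass
    force = mass * (w @ accels)         # ringing cancels in the weighted sum
    print("peak reconstructed force [MN]:", round(force.max() / 1e6, 2))
    ```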

  12. Investigation of physico-chemical processes in lithium-ion batteries by deconvolution of electrochemical impedance spectra

    Science.gov (United States)

    Manikandan, Balasundaram; Ramar, Vishwanathan; Yap, Christopher; Balaya, Palani

    2017-09-01

    The individual physico-chemical processes in lithium-ion batteries, namely solid-state diffusion and charge-transfer polarization, are difficult to track by impedance spectroscopy due to simultaneous contributions from the cathode and anode. A deeper understanding of the various polarization processes in lithium-ion batteries is important for enhancing storage performance and cycle life. In this context, the polarization processes occurring in cylindrical 18650 cells comprising different cathodes against graphite anodes (LiNi0.2Mn0.2Co0.6O2 vs. graphite; LiNi0.6Mn0.2Co0.2O2 vs. graphite; LiNi0.8Co0.15Al0.05O2 vs. graphite; and LiFePO4 vs. graphite) are investigated by deconvolution of impedance spectra across various states of charge. Further, cathodes and anodes are extracted from the investigated 18650-type cells and tested in half-cells against Li metal, as well as in symmetric cell configurations, to understand the contribution of the cathode and anode to the full cells of the various battery chemistries studied. Except for the LiFePO4 vs. graphite cell, the polarization resistance of the graphite in the other cells is found to be higher than that of the investigated cathodes, showing that the polarization in a lithium-ion battery is largely governed by the graphitic anode. Furthermore, the charge-transfer polarization resistance encountered by the cathodes investigated in this work is found to be a strong function of the state of charge.
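
    The deconvolution of an impedance spectrum into a distribution of relaxation times can be sketched as a regularized linear inversion. The toy Python below (grid, spectrum, and regularization strength are all assumptions, and the paper's actual procedure may differ) fits a synthetic two-process cell; peaks in the recovered distribution separate polarization processes that overlap in the raw spectrum.

    ```python
    import numpy as np

    f = np.logspace(-2, 5, 60)                 # measurement frequencies, Hz
    w = 2 * np.pi * f
    tau = np.logspace(-6, 2, 120)              # relaxation-time grid, s

    # Synthetic full-cell spectrum: ohmic resistance plus two RC processes
    # (e.g. cathode and anode charge transfer); values are made up.
    Z = 0.02 + 0.05 / (1 + 1j * w * 1e-3) + 0.03 / (1 + 1j * w * 1e-1)

    # Each column of A is the impedance of a unit RC element on the tau grid;
    # stack real and imaginary parts, with a constant column for R_ohmic.
    A = 1.0 / (1.0 + 1j * np.outer(w, tau))
    A_full = np.vstack([np.column_stack([np.ones_like(w), A.real]),
                        np.column_stack([np.zeros_like(w), A.imag])])
    b = np.concatenate([Z.real, Z.imag])

    # Ridge (Tikhonov) regularization stabilizes the ill-posed deconvolution;
    # production DRT codes usually also enforce non-negativity (NNLS).
    lam = 1e-3
    n_cols = A_full.shape[1]
    g, *_ = np.linalg.lstsq(np.vstack([A_full, lam * np.eye(n_cols)]),
                            np.concatenate([b, np.zeros(n_cols)]), rcond=None)
    print("R_ohmic ~", round(float(g[0]), 4))
    # Peaks of g[1:] along tau separate the overlapping polarization processes.
    ```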

  13. Rolling bearing fault diagnosis based on time-delayed feedback monostable stochastic resonance and adaptive minimum entropy deconvolution

    Science.gov (United States)

    Li, Jimeng; Li, Ming; Zhang, Jinfeng

    2017-08-01

    Rolling bearings are key components in modern machinery, and tough operating environments often make them prone to failure. However, due to the influence of the transmission path and background noise, the useful feature information relevant to the bearing fault contained in the vibration signals is weak, which makes it difficult to identify the fault symptoms of rolling bearings in time. Therefore, this paper proposes a novel weak-signal detection method based on a time-delayed feedback monostable stochastic resonance (TFMSR) system and adaptive minimum entropy deconvolution (MED) to realize the fault diagnosis of rolling bearings. The MED method is employed to preprocess the vibration signals, deconvolving the effect of the transmission path and clarifying the defect-induced impulses, and a modified power spectrum kurtosis (MPSK) index is constructed to realize adaptive selection of the filter length in the MED algorithm. By introducing a time-delayed feedback term into an over-damped monostable system, the TFMSR method can effectively utilize the historical information of the input signal to enhance the periodicity of the SR output, which is beneficial to the detection of periodic signals. Furthermore, the influence of time delay and feedback intensity on the SR phenomenon is analyzed, and by selecting an appropriate time delay, feedback intensity and re-scaling ratio with a genetic algorithm, SR can be produced to realize resonance detection of the weak signal. The combination of the adaptive MED (AMED) method and the TFMSR method is conducive to extracting feature information from strong background noise and realizing the fault diagnosis of rolling bearings. Finally, experiments and an engineering application are performed to evaluate the effectiveness of the proposed AMED-TFMSR method in comparison with a traditional bistable SR method.
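
    For readers unfamiliar with MED, the classic Wiggins-style iteration it builds on is compact: design an FIR filter whose output has maximum kurtosis, so that convolution with the filter undoes the transmission-path smearing and restores the defect-induced impulses. The sketch below is a generic MED loop with a hand-picked filter length (the paper selects it adaptively with the MPSK index); all signals and constants are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, solve

    def med_filter(x, L=30, iters=30):
        """Generic Wiggins-style MED sketch: iterate an FIR filter f so that
        the output y = f * x has maximum kurtosis, emphasizing impulses."""
        N = len(x)
        r = np.correlate(x, x, mode="full")[N - 1 : N - 1 + L]
        R = toeplitz(r) + 1e-8 * r[0] * np.eye(L)   # regularized autocorrelation
        f = np.zeros(L)
        f[L // 2] = 1.0                             # start from a centered spike
        for _ in range(iters):
            y = np.convolve(x, f, mode="same")
            # Cross-correlating x with y**3 drives the kurtosis-maximizing update.
            g = np.correlate(y ** 3, x, mode="full")[N - 1 : N - 1 + L]
            f = solve(R, g)
            f /= np.linalg.norm(f)
        return np.convolve(x, f, mode="same")

    # Demo: periodic impulses smeared by a transmission path, buried in noise.
    rng = np.random.default_rng(1)
    impulses = np.zeros(4096)
    impulses[::512] = 1.0
    path = np.exp(-np.arange(64) / 8.0)
    x = np.convolve(impulses, path, mode="same") + rng.normal(0, 0.2, 4096)
    y = med_filter(x)                   # defect impulses are sharpened in y
    ```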

  14. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain, by experimental measurements, the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac, and to apply them to dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian TrueBeam (Varian Medical Systems, Palo Alto, CA) linac at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, under the assumption that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction to the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. The kernels were used to calculate dose distributions in six modulated fields, which were compared through the gamma index to absolute doses measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output-factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm; there the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculation in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
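
    For a radially symmetric kernel, the Hankel transform coincides with the 2-D Fourier transform, so the deconvolution step can be sketched with FFTs: divide the spectrum of the measured circular-field dose by the spectrum of the field aperture, with a Wiener-style floor to keep noise from blowing up. The grid, kernel shape, and noise level below are assumptions for illustration, not the authors' measured data.

    ```python
    import numpy as np

    n, dx = 256, 0.1                        # grid size, cm per pixel (assumed)
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    R = np.hypot(X, Y)

    fluence = (R <= 2.5).astype(float)      # 50 mm diameter circular field

    # Synthetic "measured" dose: an assumed two-exponential kernel blurs the field.
    true_kernel = np.exp(-R / 0.3) + 0.02 * np.exp(-R / 2.0)
    dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                                * np.fft.fft2(np.fft.ifftshift(true_kernel))))
    dose += np.random.default_rng(2).normal(0, 1e-4 * dose.max(), dose.shape)

    # Wiener-style deconvolution: spectral division with a noise floor.
    F, D = np.fft.fft2(fluence), np.fft.fft2(dose)
    eps = 1e-3 * np.abs(F).max()
    kernel_est = np.fft.fftshift(
        np.real(np.fft.ifft2(D * np.conj(F) / (np.abs(F) ** 2 + eps ** 2))))
    # kernel_est approximates true_kernel up to noise and regularization bias.
    ```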

  15. Statistics for Research

    CERN Document Server

    Dowdy, Shirley; Chilko, Daniel

    2011-01-01

    Praise for the Second Edition "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."-The UMAP Journal Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages have made it possible for ...

  16. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  17. Statistics & probability for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics course ...

  18. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011. This excellently presented ...

  19. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, with ...

  20. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  1. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  2. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  3. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: we design data models, develop infrastructures for data sharing, and build tools for data analysis. Statistical datasets curated by National ...

  4. Telling the truth with statistics

    CERN Multimedia

    CERN. Geneva; CERN. Geneva. Audiovisual Unit

    2002-01-01

    This course of lectures will cover probability, distributions, fitting, errors and confidence levels, for practising High Energy Physicists who need to use statistical techniques to express their results. Concentrating on these appropriate specialist techniques means that they can be covered in appropriate depth, while assuming only the knowledge and experience of a typical particle physicist. The different definitions of probability will be explained, and it will become apparent why this basic subject is so controversial; there are several viewpoints, and it is important to understand them all rather than abusing the adherents of different beliefs. Distributions will be covered: the situations in which they arise, their useful properties, and the amazing result of the Central Limit Theorem. Fitting a parametrisation to a set of data is one of the most widespread uses of statistics: there are lots of ways of doing this, and these will be presented, with discussion of which is appropriate in different circumstances. This t...

  5. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    Science.gov (United States)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RFs) in regions with thick sediments, where crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolving all the data recorded at a seismic station while removing the phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images; this could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat (at 40 km) beneath most of the Llano uplift but may thicken to the south and thin beneath the Coastal Plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.
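
    The building block that WID extends to whole wavefronts is iterative time-domain deconvolution: repeatedly find the lag at which the source component best explains the residual, add a spike to the receiver function there, and subtract that arrival's predicted contribution. The sketch below (synthetic traces, a Ligorria-and-Ammon-style loop, no Gaussian filtering) is a simplified illustration, not the authors' WID code.

    ```python
    import numpy as np

    def iterative_deconv(radial, vertical, n_spikes=20):
        """Simplified iterative time-domain deconvolution: build a receiver
        function as a sparse spike train that, convolved with the vertical
        (source) trace, reproduces the radial trace."""
        N = len(vertical)
        rf = np.zeros(N)
        resid = radial.astype(float).copy()
        denom = float(vertical @ vertical)
        for _ in range(n_spikes):
            xc = np.correlate(resid, vertical, mode="full")[N - 1:]
            lag = int(np.argmax(np.abs(xc)))
            amp = xc[lag] / denom
            rf[lag] += amp
            pred = np.zeros(N)
            pred[lag:] = amp * vertical[: N - lag]   # this arrival's contribution
            resid -= pred
        return rf

    # Demo: a direct arrival, a "Ps"-like phase, and a negative reverberation.
    rng = np.random.default_rng(5)
    src = np.exp(-((np.arange(1000) - 100) / 5.0) ** 2)
    rad = 0.8 * src + 0.3 * np.roll(src, 120) - 0.2 * np.roll(src, 300)
    rf = iterative_deconv(rad + rng.normal(0, 0.01, 1000), src)
    ```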

  6. Water Residence Time estimation by 1D deconvolution in the form of a l2-regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of a mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and the mixing properties of wetlands used as filters in ecological applications, as well as for protecting fresh water sources and wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach for automatically finding a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
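
    The ingredients named in the abstract map directly onto a small least-squares problem: a causal (lower-triangular Toeplitz) convolution matrix built from the rain series, a second-difference penalty for smoothness with a single regularization parameter, and non-negativity enforced by NNLS. The sketch below uses synthetic data and an assumed lambda rather than the paper's automatic choice.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n, m = 400, 60                       # observation length, max residence lag
    r = rng.gamma(0.3, 2.0, n)           # synthetic rain series
    h_true = np.exp(-np.arange(m) / 10.0)
    h_true /= h_true.sum()

    A = toeplitz(r, np.zeros(m))         # causality: only past rain matters
    y = A @ h_true + 5.0 + rng.normal(0, 0.05, n)   # baseline level 5.0

    # Second-difference operator enforces smoothness of h; one lambda balances
    # smoothness against data misfit, as in the abstract.
    D = np.diff(np.eye(m), 2, axis=0)
    lam = 1.0

    A_aug = np.vstack([np.column_stack([A, np.ones(n)]),        # last col: baseline
                       np.column_stack([np.sqrt(lam) * D, np.zeros(m - 2)])])
    b_aug = np.concatenate([y, np.zeros(m - 2)])
    sol, _ = nnls(A_aug, b_aug)          # positivity constraint
    h_est, baseline = sol[:m], sol[m]
    # h_est approximates h_true; baseline recovers the constant aquifer level.
    ```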

  7. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identification while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and limitations of both algorithms are further discussed.
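
    The library-search step can be illustrated with the simplest spectral-matching score: the sketch below bins deconvoluted MS2 peaks into vectors and ranks library entries by cosine similarity. The binning width, the spectra, and the plain cosine score are assumptions for illustration; the actual ULSA scoring is more elaborate.

    ```python
    import numpy as np

    def to_vector(peaks, bin_width=0.01, mz_max=1000.0):
        # Bin (m/z, intensity) peaks onto a fixed grid so spectra are comparable.
        vec = np.zeros(int(mz_max / bin_width))
        for mz, inten in peaks:
            vec[int(mz / bin_width)] += inten
        return vec

    def cosine_score(query, library_entry):
        q, l = to_vector(query), to_vector(library_entry)
        denom = np.linalg.norm(q) * np.linalg.norm(l)
        return float(q @ l / denom) if denom else 0.0

    # Rank a toy library: highest score first, as in the "top three" criterion.
    query = [(121.0509, 55.0), (149.0233, 100.0), (167.0339, 30.0)]
    library = {"phthalate_frag": [(149.0233, 100.0), (167.0339, 25.0)],
               "unrelated":      [(91.0542, 100.0), (119.0491, 40.0)]}
    ranked = sorted(library, key=lambda k: cosine_score(query, library[k]),
                    reverse=True)
    print(ranked)
    ```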

  8. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  9. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  10. Tropical geometry of statistical models.

    Science.gov (United States)

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    This article presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. Here, we address the question of how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. The Newton polytope of a statistical model plays a key role. Our results are applied to the hidden Markov model and the general Markov model on a binary tree.
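
    The tropicalization the article describes can be seen on a two-state toy hidden Markov model: evaluating the marginal likelihood uses the (sum, product) semiring, and replacing it with the (max, +) semiring on log-parameters evaluates the weight of the single best hidden path instead. All probabilities below are invented for illustration.

    ```python
    import numpy as np

    T = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition probabilities (assumed)
    E = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities, symbols 0/1
    pi = np.array([0.5, 0.5])
    obs = [0, 0, 1, 0, 1]

    # Ordinary (sum, product) semiring: marginal probability of the observation.
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
    print("P(obs) =", alpha.sum())

    # Tropical (max, +) semiring on log-parameters: weight of the best hidden
    # path, i.e. Viterbi decoding as tropical evaluation of the same polynomial.
    lpi, lT, lE = np.log(pi), np.log(T), np.log(E)
    beta = lpi + lE[:, obs[0]]
    for o in obs[1:]:
        beta = (beta[:, None] + lT).max(axis=0) + lE[:, o]
    print("best-path log-probability =", beta.max())
    ```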

  11. Application of stable isotopes and isotope pattern deconvolution-ICPMS to speciation of endogenous and exogenous Fe and Se in rats

    International Nuclear Information System (INIS)

    Gonzalez Iglesias, H.; Fernandez-Sanchez, M.L.; Garcia Alonso, J.I.; Lopez Sastre, J.B.; Sanz-Medel, A.

    2009-01-01

    Full text: Enriched stable isotopes are crucial for studying essential trace element metabolism (e.g. Se, Fe) in biological systems. Measuring isotope ratios by ICPMS and using appropriate mathematical calculations based on isotope pattern deconvolution (IPD) may provide quantitative data about endogenous and exogenous essential or toxic elements and their metabolism. In this work, IPD was applied to explore the feasibility of using two enriched stable isotopes of Se (or Fe), one as a metabolic tracer and the other as a quantitation tracer, to discriminate between the endogenous and supplemented Se (or Fe) species in rat fluids by collision-cell ICPMS coupled to HPLC separation. (author)
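
    At its core, IPD is a small linear inversion: the measured isotope pattern is modelled as a blend of the natural pattern and the enriched tracer patterns, and the molar fractions are recovered by least squares. The sketch below uses the natural Se abundances together with made-up enrichment vectors and mixing fractions.

    ```python
    import numpy as np

    # Natural Se isotope abundances (74, 76, 77, 78, 80, 82), plus assumed
    # enrichment patterns for a 77Se metabolic and a 74Se quantitation tracer.
    natural  = np.array([0.0089, 0.0937, 0.0763, 0.2377, 0.4961, 0.0873])
    tracer77 = np.array([0.001, 0.006, 0.970, 0.015, 0.007, 0.001])  # assumed
    tracer74 = np.array([0.980, 0.008, 0.004, 0.004, 0.003, 0.001])  # assumed

    P = np.column_stack([natural, tracer77, tracer74])
    true_x = np.array([0.6, 0.3, 0.1])         # hypothetical molar fractions

    # Synthetic "measured" pattern with a little multiplicative noise.
    rng = np.random.default_rng(4)
    measured = P @ true_x * (1 + rng.normal(0, 0.005, 6))

    # Isotope pattern deconvolution: least-squares recovery of the fractions.
    x, *_ = np.linalg.lstsq(P, measured, rcond=None)
    print(dict(zip(["endogenous", "metabolic", "quantitation"], x.round(3))))
    ```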

  12. Statistical Physics An Introduction

    CERN Document Server

    Yoshioka, Daijiro

    2007-01-01

    This book provides a comprehensive presentation of the basics of statistical physics. The first part explains the essence of statistical physics and how it provides a bridge between microscopic and macroscopic phenomena, allowing one to derive quantities such as entropy. Here the author avoids going into details such as Liouville’s theorem or the ergodic theorem, which are difficult for beginners and unnecessary for the actual application of statistical mechanics. In the second part, statistical mechanics is applied to various systems which, although they look different, share the same mathematical structure. In this way readers can deepen their understanding of statistical physics. The book also features applications to quantum dynamics, thermodynamics, the Ising model and the statistical dynamics of free spins.

  13. The statistical stability phenomenon

    CERN Document Server

    Gorban, Igor I

    2017-01-01

    This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...

  14. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee

    2000-01-01

    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction to ...

  15. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  16. Mineral industry statistics 1975

    Energy Technology Data Exchange (ETDEWEB)

    1978-01-01

    Production, consumption and marketing statistics are given for solid fuels (coal, peat), liquid fuels and gases (oil, natural gas), iron ore, bauxite and other minerals quarried in France in 1975. Accident statistics are also included. Production statistics are presented for the Overseas Departments and territories (French Guiana, New Caledonia, New Hebrides). An account of modifications in the mining field in 1975 is given. Concessions, exploitation permits, and permits solely for prospecting for mineral products are discussed. (In French)

  17. Lectures on statistical mechanics

    CERN Document Server

    Bowler, M G

    1982-01-01

    Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus, and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more conventional ...

  18. Introduction to Statistics

    Directory of Open Access Journals (Sweden)

    Mirjam Nielen

    2017-01-01

    Full Text Available Always wondered why research papers often present rather complicated statistical analyses? Or wondered how to properly analyse the results of a pragmatic trial from your own practice? This talk will give an overview of basic statistical principles and focus on the why of statistics, rather than on the how. This is a podcast of Mirjam's talk at the Veterinary Evidence Today conference, Edinburgh, November 2, 2016.

  19. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight the ...

  20. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines lattice ...