WorldWideScience

Sample records for block-circulant deconvolution matrix

  1. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
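
    The block-circulant structure such encoders exploit can be sketched in a few lines. This is an illustrative quasi-cyclic construction with an arbitrary shift table, not the patented ARA design:

```python
import numpy as np

def circulant_shift(m, s):
    """m x m circulant permutation matrix: the identity cyclically shifted by s."""
    return np.roll(np.eye(m, dtype=int), s, axis=1)

def block_circulant_H(shifts, m):
    """Assemble a quasi-cyclic parity-check matrix from a table of shift
    exponents: -1 places an all-zero block, any other entry a circulant
    permutation block with that shift. The shift table here is illustrative."""
    rows = []
    for row in shifts:
        blocks = [np.zeros((m, m), dtype=int) if s < 0 else circulant_shift(m, s)
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# A 2 x 3 array of 4 x 4 blocks: two circulant blocks per block row.
H = block_circulant_H([[0, 1, -1], [2, -1, 3]], m=4)
```

    Because each nonzero block is fully determined by one shift exponent, storage and encoding cost scale with the number of blocks rather than the full matrix size.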

  2. TOEPLITZ, Solution of Linear Equation System with Toeplitz or Circulant Matrix

    International Nuclear Information System (INIS)

    Garbow, B.

    1984-01-01

    Description of program or function: TOEPLITZ is a collection of FORTRAN subroutines for solving linear systems Ax=b, where A is a Toeplitz matrix, a Circulant matrix, or has one or several block structures based on Toeplitz or Circulant matrices. Such systems arise in problems of electrodynamics, acoustics, mathematical statistics, algebra, in the numerical solution of integral equations with a difference kernel, and in the theory of stationary time series and signals.
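
    The circulant case is the easiest of the structures listed above, because the DFT diagonalizes every circulant matrix. A minimal numpy sketch (illustrative only; TOEPLITZ itself is a FORTRAN subroutine collection):

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b where C is the circulant matrix with first column c.

    The DFT diagonalizes every circulant matrix, so the solve costs three
    FFTs, O(n log n), instead of a dense O(n^3) factorization."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# Check against an explicitly assembled circulant matrix.
c = np.array([4.0, 1.0, 0.5, 1.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_circulant(c, b)
```

    Toeplitz (non-circulant) systems need more care, e.g. Levinson-type recursions, but a circulant embedding of the Toeplitz matrix still makes the FFT the workhorse.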

  3. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...
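
    The flat (non-convolutive) factorization that NMF2D extends can be sketched with standard Lee-Seung multiplicative updates. This is a generic NMF sketch, not the authors' 2-D convolutive algorithm:

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Non-negative matrix factorization V ~ W H by multiplicative updates
    for the Euclidean cost. NMF2D generalizes this by making the factors
    convolutive in both time and frequency; only the flat factorization
    is sketched here."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary
    return W, H
```

    The multiplicative form keeps W and H non-negative throughout, which is what lets the factors be read as spectral templates and activations.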

  4. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
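
    A sketch of the generic O(N) forward/backward recursion for the diagonal blocks of the inverse; the paper's contribution (exploiting the nearly block Toeplitz structure to avoid the O(N) sweep) is not reproduced here:

```python
import numpy as np

def diag_blocks_of_inverse(D, Sub, Sup):
    """Diagonal blocks of the inverse of a block tridiagonal matrix via the
    standard forward/backward Schur-complement recursion. D[i] are the
    diagonal blocks, Sub[i] the (i+1, i) blocks, Sup[i] the (i, i+1) blocks."""
    N = len(D)
    L = [None] * N          # left-connected Schur complements
    L[0] = D[0]
    for i in range(1, N):
        L[i] = D[i] - Sub[i - 1] @ np.linalg.solve(L[i - 1], Sup[i - 1])
    R = [None] * N          # right-connected Schur complements
    R[N - 1] = D[N - 1]
    for i in range(N - 2, -1, -1):
        R[i] = D[i] - Sup[i] @ np.linalg.solve(R[i + 1], Sub[i])
    # Combine both sweeps: (A^-1)_ii = (L_i + R_i - D_i)^-1.
    return [np.linalg.inv(L[i] + R[i] - D[i]) for i in range(N)]
```

    For a purely block Toeplitz interior, the Schur complements converge to fixed points, which is what allows the cost to become independent of the number of blocks.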

  5. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes that are played, based on a log-frequency spectrogram of the music.

  6. Blocking device especially for circulating pumps

    International Nuclear Information System (INIS)

    Susil, J.; Vychodil, V.; Lorenc, P.

    1976-01-01

    The claim of the invention is a blocking device which blocks reverse flow occurring after the shutdown of circulating pumps, namely in the operation of nuclear power plants or in pumps with a high delivery head. (F.M.)

  7. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising avenue for producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least-squares method. Computer-simulated D-DBAR spectra are generated and deconvoluted in order to find the best form of regularizer and regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution after a background subtraction, using a symmetrized resolution function obtained from an ⁸⁵Sr source with wide coincidence windows. (orig.)

  8. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.

  9. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern with the targets' backscattering coefficients. Deconvolution algorithms are therefore proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially highly ill-posed: it is sensitive to noise and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed to improve the performance of deconvolution, with the intent of considerably alleviating the ill-posedness of single-channel deconvolution. To depict the performance improvement obtained by multiple channels more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular-value distribution of the observation matrix, and these are used to compare different deconvolution systems. We present two multi-channel deconvolution algorithms that improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data verify the effectiveness of the proposed imaging methods.

  10. Approximating the imbibition and absorption behavior of a distribution of matrix blocks by an equivalent spherical block

    International Nuclear Information System (INIS)

    Zimmerman, R.W.; Bodvarsson, G.S.

    1994-03-01

    A theoretical study is presented of the effect of matrix block shape and matrix block size distribution on liquid imbibition and solute absorption in a fractured rock mass. It is shown that the behavior of an individual irregularly-shaped matrix block can be modeled with reasonable accuracy by using the results for a spherical matrix block, if one uses an effective radius a = 3V/A, where V is the volume of the block and A is its surface area. In the early-time regime of matrix imbibition, it is shown that a collection of blocks of different sizes can be modeled by a single equivalent block with an equivalent radius of ⟨a⁻¹⟩⁻¹, where the average is taken on a volumetrically-weighted basis. In an intermediate time regime, it is shown for the case where the radii are normally distributed that the equivalent radius is reasonably well approximated by the mean radius ⟨a⟩. In the long-time limit, where no equivalent radius can be rigorously defined, an asymptotic expression is derived for the cumulative diffusion as a function of the mean and the standard deviation of the radius distribution function.
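
    The two equivalent radii described above are simple to compute; a small numeric sketch (the function names are ours, not from the paper):

```python
import numpy as np

def effective_radius(volume, area):
    """Effective spherical radius a = 3V/A of an irregular matrix block."""
    return 3.0 * volume / area

def early_time_equivalent_radius(radii, volumes):
    """Volumetrically weighted harmonic mean, i.e. the early-time
    equivalent radius of a collection of blocks of different sizes."""
    a = np.asarray(radii, dtype=float)
    w = np.asarray(volumes, dtype=float)
    return 1.0 / (np.sum(w / a) / np.sum(w))
```

    As a sanity check, 3V/A recovers the radius itself for a sphere and gives s/2 for a cube of side s.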

  11. BCYCLIC: A parallel block tridiagonal matrix cyclic solver

    Science.gov (United States)

    Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.

    2010-09-01

    A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. Comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
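
    The key convenience, factoring once and reusing the stored blocks for right-hand sides that arrive later, can be illustrated with the simpler sequential block-Thomas recurrence (not the parallel cyclic reduction BCYCLIC actually implements):

```python
import numpy as np

def factor_block_tridiag(D, Sub, Sup):
    """Sequential block-Thomas factorization of a block tridiagonal matrix.
    Returns the modified diagonal blocks; keeping them allows solves against
    right-hand sides that were not known at factorization time."""
    N = len(D)
    U = [None] * N
    U[0] = D[0]
    for i in range(1, N):
        U[i] = D[i] - Sub[i - 1] @ np.linalg.solve(U[i - 1], Sup[i - 1])
    return U

def solve_block_tridiag(U, Sub, Sup, b):
    """Solve using stored factors; b is a list with one vector per block row."""
    N = len(U)
    y = [None] * N                      # forward elimination
    y[0] = b[0]
    for i in range(1, N):
        y[i] = b[i] - Sub[i - 1] @ np.linalg.solve(U[i - 1], y[i - 1])
    x = [None] * N                      # back substitution
    x[N - 1] = np.linalg.solve(U[N - 1], y[N - 1])
    for i in range(N - 2, -1, -1):
        x[i] = np.linalg.solve(U[i], y[i] - Sup[i] @ x[i + 1])
    return np.concatenate(x)
```

    Cyclic reduction reorganizes this inherently sequential recurrence into log-depth parallel stages, which is where the solver's scalability with the number of block rows comes from.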

  12. Block circulant and block Toeplitz approximants of a class of spatially distributed systems-An LQR perspective

    NARCIS (Netherlands)

    Iftime, Orest V.

    2012-01-01

    In this paper block circulant and block Toeplitz long strings of MIMO systems with finite length are compared with their corresponding infinite-dimensional spatially invariant systems. The focus is on the convergence of the sequence of solutions to the control Riccati equations and the convergence

  13. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes whose girth can be made arbitrarily large, as any multiple of 8. We then give a convolutional form of these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  14. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
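
    The jagged diagonal (JAD) idea can be sketched in numpy: rows are sorted by decreasing nonzero count so that the k-th nonzeros of all rows pack into one long vector operation. This is an illustrative reconstruction of the format, not the NEC SX-4 code:

```python
import numpy as np

def to_jad(A):
    """Convert a dense matrix to jagged-diagonal (JAD) storage.

    After sorting rows by decreasing nonzero count, the k-th nonzero of
    every (sufficiently long) row occupies a contiguous prefix, so each
    'jagged diagonal' can be processed as one vector operation."""
    nnz_per_row = (A != 0).sum(axis=1)
    perm = np.argsort(-nnz_per_row, kind="stable")
    cols = [np.nonzero(A[i])[0] for i in perm]
    njd = max(len(c) for c in cols)
    jdiags, jcols, lengths = [], [], []
    for k in range(njd):
        rows_k = [i for i, c in enumerate(cols) if len(c) > k]
        lengths.append(len(rows_k))
        jcols.append(np.array([cols[i][k] for i in rows_k]))
        jdiags.append(np.array([A[perm[i], cols[i][k]] for i in rows_k]))
    return perm, jdiags, jcols, lengths

def jad_matvec(n, perm, jdiags, jcols, lengths, x):
    """Sparse matrix-vector product from JAD storage."""
    y_perm = np.zeros(n)
    for vals, cs, L in zip(jdiags, jcols, lengths):
        y_perm[:L] += vals * x[cs]   # one long vector op per jagged diagonal
    y = np.zeros(n)
    y[perm] = y_perm                 # undo the row permutation
    return y
```

    On a vector machine each jagged diagonal becomes a single long gather-multiply-add, which is the point of the format; the block extension in the paper additionally keeps the constant-size node blocks dense.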

  15. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, the curve fitted to the experimental data being the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code can also fit a possible constant background contribution. The use of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of ²³⁹Pu, ²⁴¹Am and ²³⁵U. (author)

  16. Filtering and deconvolution for bioluminescence imaging of small animals; Filtrage et deconvolution en imagerie de bioluminescence chez le petit animal

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

    This thesis is devoted to the analysis of bioluminescence images of small animals, an imaging modality used in cancerology studies. The light from internal bioluminescent sources is, however, diffused and absorbed by the tissues, and system noise and cosmic-ray noise are also present; this degrades image quality and makes the images difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise that corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, a comparative study of various deconvolution algorithms led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the global approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  17. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

    This thesis is devoted to the analysis of bioluminescence images of small animals, an imaging modality used in cancerology studies. The light from internal bioluminescent sources is, however, diffused and absorbed by the tissues, and system noise and cosmic-ray noise are also present; this degrades image quality and makes the images difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise that corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, a comparative study of various deconvolution algorithms led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated the global approach by comparing our results with the ground truth. Through various clinical tests, we then showed that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  18. A study of block algorithms for fermion matrix inversion

    International Nuclear Information System (INIS)

    Henty, D.

    1990-01-01

    We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single algorithms. The block algorithm is also less susceptible to critical slowing down. (orig.)

  19. Model-based characterization of the transpulmonary circulation by DCE-MRI

    NARCIS (Netherlands)

    Saporito, S.; Herold, I.H.F.; Houthuizen, P.; den Boer, J.; Van Den Bosch, H.; Korsten, H.; van Assen, H.C.; Mischi, M.

    2016-01-01

    Objective measures to assess pulmonary circulation status would improve heart failure patient care. We propose a method for the characterization of the transpulmonary circulation by DCE-MRI. Parametric deconvolution was performed between contrast agent first passage time-enhancement curves derived

  20. On the fusion matrix of the N=1 Neveu-Schwarz blocks

    OpenAIRE

    Hadasz, Leszek

    2007-01-01

    We propose an exact form of the fusion matrix of the Neveu-Schwarz blocks that appear in a correlation function of four super-primary fields. Orthogonality relation satisfied by this matrix is equivalent to the bootstrap equation for the four-point super-primary correlator in the N=1 supersymmetric Liouville theory.

  1. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  2. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  3. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  4. On the fusion matrix of the N = 1 Neveu-Schwarz blocks

    International Nuclear Information System (INIS)

    Hadasz, Leszek

    2007-01-01

    We propose an exact form of the fusion matrix of the Neveu-Schwarz blocks that appear in a correlation function of four super-primary fields. Orthogonality relation satisfied by this matrix is equivalent to the bootstrap equation for the four-point super-primary correlator in the N = 1 supersymmetric Liouville field theory

  5. On the fusion matrix of the N = 1 Neveu-Schwarz blocks

    Science.gov (United States)

    Hadasz, Leszek

    2007-12-01

    We propose an exact form of the fusion matrix of the Neveu-Schwarz blocks that appear in a correlation function of four super-primary fields. Orthogonality relation satisfied by this matrix is equivalent to the bootstrap equation for the four-point super-primary correlator in the N = 1 supersymmetric Liouville field theory.

  6. From spinning conformal blocks to matrix Calogero-Sutherland models

    Science.gov (United States)

    Schomerus, Volker; Sobko, Evgeny

    2018-04-01

    In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.

  7. Random matrix theory for pseudo-Hermitian systems: Cyclic blocks

    Indian Academy of Sciences (India)

    We discuss the relevance of random matrix theory for pseudo-Hermitian systems, and, for Hamiltonians that break parity P and time-reversal invariance T. In an attempt to understand the random Ising model, we present the treatment of cyclic asymmetric matrices with blocks and show that the nearest-neighbour spacing ...

  8. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Stain colour estimation is a prominent factor in the analysis pipeline of most histology image-processing algorithms, and a reliable and efficient stain-colour deconvolution approach is fundamental to their robustness. In this paper, we propose a novel method for stain-colour deconvolution of histology images. The approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered, uncorrelated data. We conducted an extensive set of experiments comparing the proposed method to recent state-of-the-art methods, and demonstrate the robustness of the approach on three different datasets of scanned slides, prepared in different labs using different scanners.
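
    The classical colour-deconvolution baseline that such stain-estimation methods refine can be sketched as follows; the H&E stain vectors are the commonly quoted Ruifrok-Johnston defaults, not values estimated by the paper:

```python
import numpy as np

# Unit optical-density vectors for haematoxylin and eosin (commonly quoted
# defaults; the paper instead estimates the mixing matrix from the image).
HE = np.array([[0.650, 0.704, 0.286],
               [0.072, 0.990, 0.105]])
HE /= np.linalg.norm(HE, axis=1, keepdims=True)

def stain_deconvolve(rgb, stain_matrix, I0=255.0):
    """Map pixels to optical density via Beer-Lambert, then unmix with the
    pseudo-inverse of the stain mixing matrix (rows = stain OD vectors)."""
    od = -np.log10(np.clip(rgb, 1.0, I0) / I0)   # (n_pixels, 3)
    return od @ np.linalg.pinv(stain_matrix)     # (n_pixels, n_stains)

# A pixel containing one unit of pure haematoxylin:
pure_h = 255.0 * 10.0 ** (-HE[0])
conc = stain_deconvolve(pure_h[None, :], HE)
```

    Everything then hinges on how well the mixing matrix matches the slide, which is exactly what the paper's statistical estimation targets.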

  9. Random matrix theory for pseudo-Hermitian systems: Cyclic blocks

    Indian Academy of Sciences (India)

    We discuss the relevance of random matrix theory for pseudo-Hermitian systems, and, for Hamiltonians that break parity P and time-reversal invariance T. In an attempt to understand the random Ising model, we present the treatment of cyclic asymmetric matrices with blocks and show that the nearest-neighbour ...

  10. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
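
    For contrast, a plain frequency-domain Wiener deconvolution (circular boundaries) looks like this; the paper's partial deconvolution additionally masks Fourier entries where the kernel estimate is unreliable, which this sketch omits:

```python
import numpy as np

def wiener_deconvolve(y, kernel, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with circular boundaries.

    `nsr` is the assumed noise-to-signal power ratio; it keeps the filter
    bounded where |K(f)| is small. Partial deconvolution goes further by
    discarding Fourier entries where the kernel *estimate* is unreliable."""
    n = len(y)
    K = np.fft.fft(kernel, n)
    X = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft(X))
```

    With an exact kernel and little noise this recovers the signal almost perfectly; the paper's point is what happens when the kernel itself carries estimation error.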

  11. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of the system and thereby improve the final resolution of the signals. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase, which classical deconvolution algorithms cannot handle. Secondly, the shape of the propagating pulse evolves with the medium, so the spatial-invariance assumption used in classical deconvolution algorithms is rarely valid. Several classical algorithms, parametric and non-parametric, have been investigated: Wiener-type filters, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum-variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up was also analysed with both simulated and real data. This set-up demonstrated the benefit, in terms of achieved resolution, of applying deconvolution. (author). 32 figs., 29 refs.

  12. Circulating matrix metalloproteinases are associated with arterial stiffness in patients with type 1 diabetes

    DEFF Research Database (Denmark)

    Peeters, Stijn A.; Engelen, Lian; Buijs, Jacqueline

    2017-01-01

    BACKGROUND: Altered regulation of extracellular matrix (ECM) composition by matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinase (TIMPs) may contribute to arterial stiffening. We investigated associations between circulating MMP-1, -2, -3, -9, -10 and TIMP-1, and carotid… Linear regression analyses were used to investigate cross-sectional associations between circulating levels of MMP-1, -2, -3, -9, -10, and TIMP-1 and cfPWV (n = 614) as well as office PP (n = 1517). Data on 24-h brachial and 24-h central PP were available in 638 individuals from PROFIL. Analyses were… was associated with cfPWV [β per 1 SD higher lnMMP3 0.29 m/s (0.02; 0.55)]. In addition, brachial and central 24-h PP measurements in PROFIL were significantly associated with MMP-2 [1.40 (0.47; 2.33) and 1.43 (0.63; 2.23)]. Pooled data analysis showed significant associations of circulating levels of MMP-1…

  13. Circulating blocking factors of lymphoid-cell cytotoxicity in x-ray-induced rat small-bowel adenocarcinoma

    International Nuclear Information System (INIS)

    Stevens, R.H.; Brooks, G.P.; Osborne, J.W.

    1979-01-01

    Circulating blocking factors capable of abrogating cell-mediated immune responses measured by in vitro lymphoid-cell cytotoxicity were identified in the sera of Holtzman outbred rats 6 to 9 months after a single exposure of only the temporarily exteriorized, hypoxic ileum and jejunum to 1700 to 2000 R of X radiation. Such factors were found to exist in the serum of every animal exposed to the ionizing radiation regardless of whether a visibly identifiable small-bowel adenocarcinoma existed or subsequently would develop. Protection of cultured x-ray-induced rat small-bowel cancer cells from destruction by tumor-sensitized lymphoid cells as measured by the release of lactoperoxidase-catalyzed radioiodinated membrane proteins from the tumor target cells was conferred by the action of the blocking factors at both effector and target cell levels. The results of this study demonstrate that exposure of only the rat small intestine to ionizing radiation leads to elaboration of circulating factors identifiable several months postirradiation which will block cell-mediated immune responses directed against cancer cells developing in the exposed tissue

  14. Link Prediction via Convex Nonnegative Matrix Factorization on Multiscale Blocks

    Directory of Open Access Journals (Sweden)

    Enming Dong

    2014-01-01

Full Text Available Low-rank matrix approximations have been used for link prediction in networks; these are usually globally optimal methods that make little use of local information. The block structure is a significant local feature of matrices: entities in the same block have similar values, which implies that links are more likely to be found within dense blocks. We use this insight to give a probabilistic latent variable model for finding missing links by convex nonnegative matrix factorization with block detection. The experiments show that this method gives better prediction accuracy than the original method alone. Unlike the original low-rank approximation methods for link prediction, the sparseness of the solutions is in accord with the sparse property of most real complex networks. To scale to massive networks, we use the block information to map matrices onto distributed architectures and give a divide-and-conquer prediction method. The experiments show that it gives better results than the common-neighbors method when the networks have a large number of missing links.
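A minimal nonnegative matrix factorization on an adjacency matrix illustrates the low-rank scoring idea behind such link predictors. This is a plain Lee-Seung multiplicative-update sketch, not the authors' convex NMF with block detection; the rank, iteration count, and test matrix are illustrative.

```python
import numpy as np

def nmf(A, rank, n_iter=500, seed=0):
    """Basic multiplicative-update NMF (Frobenius loss): A ~ W @ H, W, H >= 0.
    For link prediction, entries of W @ H at zero positions of A can be read
    as scores for missing links."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-12)   # update H, keeping it nonnegative
        W *= (A @ H.T) / (W @ H @ H.T + 1e-12)   # update W likewise
    return W, H

# A rank-2 block-diagonal "network" of two dense communities.
A = np.kron(np.eye(2), np.ones((3, 3)))
W, H = nmf(A, rank=2)
scores = W @ H   # high entries inside blocks, low across blocks
```

Because the test matrix is exactly rank 2 and nonnegative, the multiplicative updates drive the reconstruction error close to zero.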

  15. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin the adaptation and application of this method to the deconvolution of positron lifetime annihilation spectra

  16. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT. We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM and contrast-to-noise ratio (CNR of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum, achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov
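The Fourier-division, Wiener, and Tikhonov filters compared above can all be written as spectral divisions with different regularizations. A minimal 1-D sketch follows; it is not the paper's implementation, and the kernel, signal sizes, and regularization constants are illustrative. With an identity regularizer and a circulant blur, the Tikhonov solution coincides with a Wiener filter with a constant noise-to-signal ratio.

```python
import numpy as np

def wiener_deconvolve(y, h, nsr=1e-6):
    """Wiener deconvolution: X = conj(H) Y / (|H|^2 + NSR)."""
    H = np.fft.fft(h, len(y))
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(X))

def tikhonov_deconvolve(y, h, lam=1e-6):
    """Tikhonov L2-regularized inversion; unlike plain Fourier division it
    stays well defined where H(omega) vanishes."""
    H = np.fft.fft(h, len(y))
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Recover an impulse blurred by circular convolution with a small kernel.
x_true = np.zeros(64); x_true[10] = 1.0
h = np.array([0.25, 0.5, 0.25])
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h, 64)))
x_rec = wiener_deconvolve(y, h)
```

Plain Fourier division would divide by zero at the Nyquist frequency of this kernel; both regularized filters simply attenuate that component instead.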

  17. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  18. The Invertibility, Explicit Determinants, and Inverses of Circulant and Left Circulant and g-Circulant Matrices Involving Any Continuous Fibonacci and Lucas Numbers

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

Full Text Available Circulant matrices play an important role in solving delay differential equations. In this paper, circulant type matrices, including circulant, left circulant, and g-circulant matrices with any continuous Fibonacci and Lucas numbers, are considered. Firstly, the invertibility of the circulant matrix is discussed, and the explicit determinant and inverse matrices are presented by constructing transformation matrices. Furthermore, the invertibility of the left circulant and g-circulant matrices is also studied. We obtain the explicit determinants and inverse matrices of the left circulant and g-circulant matrices by utilizing their relationship with the circulant matrix.
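The closed-form determinants and inverses above rest on the fact that every circulant matrix is diagonalized by the discrete Fourier transform. A small numerical sketch of that property (illustrative only, not the paper's Fibonacci/Lucas constructions):

```python
import numpy as np

def circulant(c):
    """Full circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# Eigenvalues of a circulant matrix are the DFT of its first column, so the
# determinant is their product, and the inverse is again circulant with
# first column ifft(1 / fft(c)).
c = np.array([4.0, 1.0, 0.0, 1.0])
C = circulant(c)
det_fft = np.prod(np.fft.fft(c)).real
inv_first_col = np.fft.ifft(1.0 / np.fft.fft(c)).real
C_inv = circulant(inv_first_col)
```

The same diagonalization is what makes circulant linear systems solvable in O(n log n) time via the FFT.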

  19. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

Convolution and deconvolution operations are an important aspect of SSI analysis, since they shape the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities

  20. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  1. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By this means, the final quality of the signals in terms of resolution is improved. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase; the classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up was also analysed; simulated and real data have been produced. This set-up demonstrated the interest of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs

  2. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1 norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
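Of the techniques compared above, the water-level filter is the simplest to sketch: it is plain spectral division with the denominator power clipped from below. A toy version under assumed parameters follows; the TV and L1 variants require iterative solvers and are not shown.

```python
import numpy as np

def water_level_deconvolve(y, h, level=1e-4):
    """Water-level deconvolution: spectral division where the denominator
    power |H|^2 is raised ('flooded') to at least `level` times its maximum,
    bounding the amplification of spectral holes."""
    H = np.fft.fft(h, len(y))
    P = np.abs(H) ** 2
    P = np.maximum(P, level * P.max())
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / P))

# Remove a simple two-sample source wavelet from a delayed copy of itself.
n = 32
x_true = np.zeros(n); x_true[5] = 1.0
h = np.array([1.0, 0.5])
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h, n)))
x_rec = water_level_deconvolve(y, h)
```

Because this wavelet's spectrum never comes close to zero, the water level is never reached and the recovery is exact; for band-limited sources the clipping trades accuracy for stability.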

  3. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very usual in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices), when the tight-binding approximation is used. The efficiency of the method is analyzed comparing the required CPU time and work-area for different usual techniques. (Author)
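The partition-based idea generalizes the scalar Thomas algorithm. A compact block-Thomas solver, shown here as a generic sketch rather than Godfrin's exact recursion, factors the system by block Gaussian elimination:

```python
import numpy as np

def block_thomas_solve(A, B, C, d):
    """Solve a block tridiagonal system by block Gaussian elimination.
    A[i]: diagonal blocks (k x k), i = 0..n-1
    B[i]: sub-diagonal block coupling row i+1 to row i, i = 0..n-2
    C[i]: super-diagonal block coupling row i to row i+1, i = 0..n-2
    d[i]: right-hand-side block vectors of length k"""
    n = len(A)
    Ap = [a.copy() for a in A]
    b = [v.copy() for v in d]
    for i in range(1, n):                         # forward elimination
        W = B[i - 1] @ np.linalg.inv(Ap[i - 1])
        Ap[i] = Ap[i] - W @ C[i - 1]
        b[i] = b[i] - W @ b[i - 1]
    x = [None] * n                                # back substitution
    x[n - 1] = np.linalg.solve(Ap[n - 1], b[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Ap[i], b[i] - C[i] @ x[i + 1])
    return np.concatenate(x)

# Verify against a dense solve on a small seeded, diagonally dominant system.
rng = np.random.default_rng(1)
k, nb = 2, 4
A = [rng.standard_normal((k, k)) + 5.0 * np.eye(k) for _ in range(nb)]
B = [0.1 * rng.standard_normal((k, k)) for _ in range(nb - 1)]
C = [0.1 * rng.standard_normal((k, k)) for _ in range(nb - 1)]
d = [rng.standard_normal(k) for _ in range(nb)]
M = np.zeros((nb * k, nb * k))
for i in range(nb):
    M[i*k:(i+1)*k, i*k:(i+1)*k] = A[i]
for i in range(nb - 1):
    M[(i+1)*k:(i+2)*k, i*k:(i+1)*k] = B[i]
    M[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = C[i]
x = block_thomas_solve(A, B, C, d)
```

The cost is O(n k^3) instead of the O((nk)^3) of a dense solve, which is why such recursions are attractive for tight-binding superlattice Hamiltonians.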

  4. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659

  5. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

Woelffel, William; Claireaux, Corinne; Toplis, Michael J.; Burov, Ekaterina; Barthel, Etienne; Shukla, Abhay; Biscaras, Johan; Chopinet, Marie-Hélène; Gouillart, Emmanuelle

    2015-01-01

International audience; Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi...

  6. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
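The Toeplitz normal equations mentioned above are exactly what the Levinson-Durbin recursion solves in O(n^2) operations. A generic sketch of that recursion for the prediction-error filter (illustrative, not the authors' code); note that the recursion stays stable precisely while every reflection coefficient has magnitude below 1, as the abstract observes:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: given autocorrelation lags r[0..order],
    return the prediction-error filter a (with a[0] = 1) and the final
    prediction-error power e, solving the Toeplitz normal equations."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / e        # reflection coefficient, |k| < 1
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        e *= 1.0 - k * k
    return a, e

# Autocorrelation of a unit-variance AR(1) process with coefficient 0.5:
a, e = levinson_durbin(np.array([1.0, 0.5, 0.25]), order=2)
```

For this AR(1) input the order-2 filter correctly reduces to [1, -0.5, 0] with prediction-error power 0.75.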

  7. Deletions in the fifth alpha helix of HIV-1 matrix block virus release

    International Nuclear Information System (INIS)

    Sanford, Bridget; Li, Yan; Maly, Connor J.; Madson, Christian J.; Chen, Han; Zhou, You; Belshan, Michael

    2014-01-01

    The matrix (MA) protein of HIV-1 is the N-terminal component of the Gag structural protein and is critical for the early and late stages of viral replication. MA contains five α-helices (α1–α5). Deletions in the N-terminus of α5 as small as three amino acids impaired virus release. Electron microscopy of one deletion mutant (MA∆96-120) showed that its particles were tethered to the surface of cells by membranous stalks. Immunoblots indicated all mutants were processed completely, but mutants with large deletions had alternative processing intermediates. Consistent with the EM data, MA∆96-120 retained membrane association and multimerization capability. Co-expression of this mutant inhibited wild type particle release. Alanine scanning mutation in this region did not affect virus release, although the progeny virions were poorly infectious. Combined, these data demonstrate that structural ablation of the α5 of MA inhibits virus release. - Highlights: • Deletions were identified in the C-terminus of matrix that block virus release. • These deletion mutants still multimerized and associated with membranes. • TEM showed the mutant particles were tethered to the cell surface. • Amino acid mutagenesis of the region did not affect release. • The data suggests that disruption of matrix structure blocks virus release

  8. Deletions in the fifth alpha helix of HIV-1 matrix block virus release

    Energy Technology Data Exchange (ETDEWEB)

    Sanford, Bridget; Li, Yan; Maly, Connor J.; Madson, Christian J. [Department of Medical Microbiology and Immunology, Creighton University, 2500 California Plaza, Omaha, NE 68178 (United States); Chen, Han [Center for Biotechnology, University of Nebraska-Lincoln, Lincoln, NE (United States); Zhou, You [Center for Biotechnology, University of Nebraska-Lincoln, Lincoln, NE (United States); Nebraska Center for Virology, Lincoln, NE (United States); Belshan, Michael, E-mail: michaelbelshan@creighton.edu [Department of Medical Microbiology and Immunology, Creighton University, 2500 California Plaza, Omaha, NE 68178 (United States); Nebraska Center for Virology, Lincoln, NE (United States)

    2014-11-15

    The matrix (MA) protein of HIV-1 is the N-terminal component of the Gag structural protein and is critical for the early and late stages of viral replication. MA contains five α-helices (α1–α5). Deletions in the N-terminus of α5 as small as three amino acids impaired virus release. Electron microscopy of one deletion mutant (MA∆96-120) showed that its particles were tethered to the surface of cells by membranous stalks. Immunoblots indicated all mutants were processed completely, but mutants with large deletions had alternative processing intermediates. Consistent with the EM data, MA∆96-120 retained membrane association and multimerization capability. Co-expression of this mutant inhibited wild type particle release. Alanine scanning mutation in this region did not affect virus release, although the progeny virions were poorly infectious. Combined, these data demonstrate that structural ablation of the α5 of MA inhibits virus release. - Highlights: • Deletions were identified in the C-terminus of matrix that block virus release. • These deletion mutants still multimerized and associated with membranes. • TEM showed the mutant particles were tethered to the cell surface. • Amino acid mutagenesis of the region did not affect release. • The data suggests that disruption of matrix structure blocks virus release.

  9. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) is partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed in a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011

  10. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image depending how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  11. VanderLaan Circulant Type Matrices

    Directory of Open Access Journals (Sweden)

    Hongyan Pan

    2015-01-01

Full Text Available Circulant matrices have become satisfactory tools in control methods for modern complex systems. In this paper, VanderLaan circulant type matrices are presented, which include VanderLaan circulant, left circulant, and g-circulant matrices. The nonsingularity of these special matrices is discussed using the surprising properties of VanderLaan numbers. The exact determinants of VanderLaan circulant type matrices are given by structuring transformation matrices, determinants of well-known tridiagonal matrices, and tridiagonal-like matrices. The explicit inverse matrices of these special matrices are obtained by structuring transformation matrices, inverses of known tridiagonal matrices, and quasi-tridiagonal matrices. Three kinds of norms and lower bounds for the spread of VanderLaan circulant and left circulant matrices are given separately. We also obtain the spectral norm of the VanderLaan g-circulant matrix.

  12. Study of matrix micro-cracking in nano clay and acrylic tri-block-copolymer modified epoxy/basalt fiber-reinforced pressure-retaining structures

    Directory of Open Access Journals (Sweden)

    2011-10-01

Full Text Available In fiber-reinforced polymer pressure-retaining structures, such as pipes and vessels, micro-level failure commonly causes fluid permeation due to matrix cracking. This study explores the effect of nano-reinforcements on matrix cracking in filament-wound basalt fiber/epoxy composite structures. The microstructure and mechanical properties of bulk epoxy nanocomposites and hybrid fiber-reinforced composite pipes modified with acrylic tri-block-copolymer and organophilic layered silicate clay were investigated. In cured epoxy, the tri-block-copolymer phase separated into disordered spherical micelle inclusions; an exfoliated and intercalated structure was observed for the nano-clay. Block-copolymer addition significantly enhanced epoxy fracture toughness by a mechanism of particle cavitation and matrix shear yielding, whereas toughness remained unchanged in nano-clay filled nanocomposites due to the occurrence of lower energy resistance phenomena such as crack deflection and branching. Tensile stiffness increased with nano-clay content, while it decreased slightly for block-copolymer modified epoxy. Composite pipes modified with either the organic or inorganic nanoparticles exhibited moderate improvements in leakage failure strain (i.e. matrix cracking strain); however, reductions in functional and structural failure strength were observed.

  13. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

    We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We constructed score tests for the deconvolution density testing with the known noise density and efficient score tests for the case of unknown density. The

  14. Jordan blocks and Gamow-Jordan eigenfunctions associated to a double pole of the S-matrix

    International Nuclear Information System (INIS)

    Hernandez, E.; Mondragon, A.; Jauregui, A.

    2002-01-01

An accidental degeneracy of resonances gives rise to a double pole in the scattering matrix, a double zero in the Jost function, and a Jordan chain of length two of generalized Gamow-Jordan eigenfunctions of the radial Schrodinger equation. The generalized Gamow-Jordan eigenfunctions are basis elements of an expansion in bound and resonant energy eigenfunctions plus a continuum of scattering wave functions of complex wave number. In this biorthonormal basis, any operator f(H_r(l)) which is a regular function of the Hamiltonian is represented by a complex matrix which is diagonal except for a Jordan block of rank two. The occurrence of a double pole in the Green's function, as well as the non-exponential time evolution of the Gamow-Jordan generalized eigenfunctions, are associated to the Jordan block in the complex energy representation. (Author)

15. Reduction of Under-Determined Linear Systems by Sparse Block Matrix Technique

    DEFF Research Database (Denmark)

    Tarp-Johansen, Niels Jacob; Poulsen, Peter Noe; Damkilde, Lars

    1996-01-01

Under-determined linear equation systems occur in different engineering applications. In structural engineering they typically appear when applying the force method; as an example one could mention limit load analysis based on the Lower Bound Theorem. In this application there is a set of under-determined equilibrium equation restrictions in an LP-problem. A significant reduction of the computer time spent on solving the LP-problem is achieved if the equilibrium equations are reduced before going into the optimization procedure. Experience has shown that for some structures one must apply full pivoting to ensure numerical stability of the aforementioned reduction. Moreover, the coefficient matrix for the equilibrium equations is typically very sparse. The objective is to deal efficiently with the full-pivoting reduction of sparse rectangular matrices using a dynamic storage scheme based on the block matrix concept.

  16. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
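The Monte-Carlo idea, repeating the unfold on many noise-perturbed copies of the data and reading off per-channel statistics, can be sketched generically. The convergent-weight and SVD unfolds themselves are not reproduced here; the stand-in smoothing kernel and noise level below are made up for illustration.

```python
import numpy as np

def spectral_deconvolve(y, h, lam=1e-2):
    """Regularized spectral division, used here as a stand-in unfolding step."""
    H = np.fft.fft(h, len(y))
    return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)))

def mc_error_bars(y, h, sigma, n_trials=300, seed=0):
    """Propagate random error through the unfold: deconvolve many
    noise-perturbed copies of the data and report the per-channel mean and
    standard deviation (the error bars)."""
    rng = np.random.default_rng(seed)
    trials = np.array([spectral_deconvolve(y + rng.normal(0.0, sigma, len(y)), h)
                       for _ in range(n_trials)])
    return trials.mean(axis=0), trials.std(axis=0)

# Toy unfold: a blurred spike observed with 1% channel noise.
x_true = np.zeros(64); x_true[20] = 1.0
h = np.array([0.25, 0.5, 0.25])
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(h, 64)))
mean, err = mc_error_bars(y, h, sigma=0.01)
```

A practical advantage noted in the abstract carries over directly: constraints such as nonnegativity can be applied to each Monte-Carlo sample after the unfold rather than before.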

  17. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into stability, accuracy, and time complexity of a numerical bijection between time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric......, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se, is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving...... discrimination achieves mixed phase deconvolution and equivalates complex cepstrum based deconvolution by causality, which has lower time and space complexities as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing...
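The ZZT representation and its union-of-sets convolution property can be demonstrated in a few lines (an illustrative sketch, not the thesis's numerics):

```python
import numpy as np

def zzt(x):
    """Zeros of the z-transform of a finite signal x: since
    X(z) = sum_n x[n] z^(-n) = z^(-(N-1)) (x[0] z^(N-1) + ... + x[N-1]),
    the nontrivial zeros are the roots of the coefficient polynomial."""
    return np.roots(x)

# Time-domain convolution maps to the union of the factors' zero sets:
a = np.array([1.0, -0.5])     # single zero at z = 0.5
b = np.array([1.0, 0.25])     # single zero at z = -0.25
c = np.convolve(a, b)         # zeros should be {0.5, -0.25}
zeros = np.sort(zzt(c).real)
```

Deconvolution in this domain amounts to removing one factor's zeros from the set, which is why discriminating the zeros (e.g. by radius) can separate mixed-phase components.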

  18. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  19. Matrix protein 2 of influenza A virus blocks autophagosome fusion with lysosomes

    DEFF Research Database (Denmark)

    Gannagé, Monique; Dormann, Dorothee; Albrecht, Randy

    2009-01-01

Influenza A virus is an important human pathogen causing significant morbidity and mortality every year and threatening the human population with epidemics and pandemics. Therefore, it is important to understand the biology of this virus to develop strategies to control its pathogenicity. Here, we demonstrate that influenza A virus inhibits macroautophagy, a cellular process known to be manipulated by diverse pathogens. Influenza A virus infection causes accumulation of autophagosomes by blocking their fusion with lysosomes, and one viral protein, matrix protein 2, is necessary and sufficient for this inhibition of autophagosome degradation. Macroautophagy inhibition by matrix protein 2 compromises survival of influenza virus-infected cells but does not influence viral replication. We propose that influenza A virus, which also encodes proapoptotic proteins, is able to determine the death of its host cell...

  20. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
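The Van Cittert iteration underlying the approximate deconvolution method is simple to sketch for a periodic 1-D field. The filter kernel, grid size, and iteration count below are illustrative; the paper's regularised variant minimises a quadratic criterion instead.

```python
import numpy as np

def van_cittert(y_filtered, g, n_iter=200):
    """Van Cittert iterative deconvolution on a periodic grid:
    x_{k+1} = x_k + (y - G x_k), starting from x_0 = y, where G applies
    circular convolution with the (zero-phase) filter kernel g.
    Converges for filter transfer functions with values in (0, 1]."""
    G = np.fft.fft(g)
    x = y_filtered.copy()
    Y = np.fft.fft(y_filtered)
    for _ in range(n_iter):
        x = x + np.real(np.fft.ifft(Y - G * np.fft.fft(x)))
    return x

# Zero-phase binomial filter on a 32-point periodic grid, applied to a spike.
n = 32
g = np.zeros(n); g[0] = 0.5; g[1] = 0.25; g[-1] = 0.25
x_true = np.zeros(n); x_true[8] = 1.0
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(g)))
x_rec = van_cittert(y, g)
```

Modes where the filter transfer function is close to zero recover very slowly, which is the practical motivation for the regularised formulations the paper evaluates.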

  1. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and a seismic wavelet. One of the most important stages in seismic data processing is deconvolution; conventional deconvolution applies inverse filters based on Wiener filter theory. This theory rests on certain modelling assumptions, which may not always be valid. The discrete form of the Kalman filter is therefore used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its ability to handle continually time-varying models and its high resolution capabilities. In this work, we use a discrete Kalman filter combined with primitive deconvolution. Since the filtering process works on the reflectivity function, the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with an energy waveform referred to as the seismic wavelet. A higher-frequency wavelet gives a smaller wavelength; graphs of these results are presented.
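
    As a point of comparison for the Kalman approach, the classical Wiener-theory inverse filter the abstract refers to can be sketched as a regularised spectral division. The wavelet, trace length, and noise-to-signal parameter below are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(trace, wavelet, noise_to_signal=1e-3):
    # frequency-domain Wiener-style inverse filter: conj(W) / (|W|^2 + eps)
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    T = np.fft.rfft(trace)
    return np.fft.irfft(T * np.conj(W) / (np.abs(W)**2 + noise_to_signal), n)
```

    The `noise_to_signal` term keeps the division stable where the wavelet spectrum is weak; the Kalman formulation replaces this fixed trade-off with a time-varying state-space model.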

  2. Computing the sparse matrix vector product using block-based kernels without zero padding on processors with AVX-512 instructions

    Directory of Open Access Journals (Sweden)

    Bérenger Bramas

    2018-04-01

    Full Text Available The sparse matrix-vector product (SpMV is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC community has therefore continuously invested a lot of effort to provide an efficient SpMV kernel on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used utilizing a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open source library, called SPC5.
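
    For reference, the operation that all such kernels optimize is the plain CSR sparse matrix-vector product; a scalar Python sketch (with none of the paper's AVX-512 blocking or masking) looks like this:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    # reference scalar kernel: y = A @ x with A in CSR storage
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

    Block-based kernels accelerate exactly this loop by processing several rows or columns at once, which is where zero padding, or the paper's mask registers, come into play.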

  3. Application of blocking diagnosis methods to general circulation models. Part I: a novel detection scheme

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal); Universidad de Extremadura, Departamento de Fisica, Facultad de Ciencias, Badajoz (Spain); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain); Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Ed. C-8, Lisbon (Portugal)

    2010-12-15

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without increasing the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to devising an objective scheme that is easily applicable.

  4. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min, a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during the annealing.

  5. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used ...

  6. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    Deconvolution approaches in acoustic source localization model the beamforming map as a convolution of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., the point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  7. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  8. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Blind deconvolution has recently provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution chains the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on the minimization of a cost function defined in terms of the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it needs to be implemented on post-stack or pre-stack seismic data of complex-structure regions.
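
    A generic sparse-spike deconvolution in the same spirit can be sketched with ISTA, minimizing a least-squares misfit plus an l1 penalty. Note this is a plain l1 penalty, not the smoothed l1/l2 ratio that SOOT actually minimizes, and the wavelet and parameters below are illustrative assumptions.

```python
import numpy as np

def ista_deconvolve(trace, wavelet, lam=0.01, iterations=500):
    # generic l1-regularized sparse-spike deconvolution via ISTA
    # (illustrative only; SOOT minimizes a smoothed l1/l2 ratio instead)
    n, m = len(trace), len(wavelet)

    def forward(r):                     # convolve and truncate to trace length
        return np.convolve(r, wavelet)[:n]

    def adjoint(y):                     # correlate with the wavelet
        return np.correlate(y, wavelet, mode="full")[m - 1:m - 1 + n]

    step = 1.0 / np.sum(np.abs(wavelet))**2   # 1/L with a crude Lipschitz bound
    x = np.zeros(n)
    for _ in range(iterations):
        x = x - step * adjoint(forward(x) - trace)
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x
```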

  9. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp

    2017-09-04

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular this univariate case highly suffers from several non-trivial ambiguities and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP) demonstrating that -theoretically- recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.

  10. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular this univariate case highly suffers from several non-trivial ambiguities and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP) demonstrating that -theoretically- recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.

  11. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
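
    When the low-rank blocks of Q are collected into factors U and V^T, one standard way to exploit the structure B = D + Q is the Woodbury identity. The sketch below (an assumption for illustration, not necessarily the author's scheme) reduces the solve to D-solves plus a small dense system:

```python
import numpy as np

def solve_diag_plus_lowrank(D, U, V, y):
    # solve (D + U V^T) x = y with the Woodbury identity; D is the easily
    # invertible (block-)diagonal part, U V^T collects the low-rank blocks
    Dinv_y = np.linalg.solve(D, y)
    Dinv_U = np.linalg.solve(D, U)
    core = np.eye(U.shape[1]) + V.T @ Dinv_U
    return Dinv_y - Dinv_U @ np.linalg.solve(core, V.T @ Dinv_y)
```

    The D-solves and the products with U and V are exactly the operations that distribute well across processors, which is the parallelism the abstract describes.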

  12. Interference Cancellation Schemes for Single-Carrier Block Transmission with Insufficient Cyclic Prefix

    Directory of Open Access Journals (Sweden)

    Hayashi Kazunori

    2008-01-01

    This paper proposes intersymbol interference (ISI) and interblock interference (IBI) cancellation schemes at the transmitter and the receiver for single-carrier block transmission with an insufficient cyclic prefix (CP). The proposed scheme at the transmitter can eliminate the interference simply by setting some signals in the transmitted signal block to be the same as those of the previous transmitted signal block. On the other hand, the proposed schemes at the receiver can cancel the interference without any change in the transmitted signals compared to the conventional method. The IBI components are reduced by using previously detected data signals, while for ISI cancellation, we first change the defective channel matrix into a circulant matrix by using tentative decisions obtained from our newly derived frequency domain equalization (FDE), and then conventional FDE is performed to compensate for the ISI. Moreover, we propose a pilot signal configuration that enables estimation of a channel impulse response whose order is greater than the guard interval (GI). Computer simulations show that the proposed interference cancellation schemes can significantly improve bit error rate (BER) performance; the validity of the proposed channel estimation scheme is also demonstrated.
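
    The key step, restoring a circulant channel matrix, is what makes one-tap FDE possible, because a circulant matrix is diagonalized by the DFT. A minimal zero-forcing sketch follows; the paper's receiver additionally performs interference cancellation and a practical system would typically use MMSE weights instead.

```python
import numpy as np

def zf_fde(received_block, channel_impulse_response):
    # one-tap zero-forcing frequency-domain equalization; valid when the CP is
    # long enough that the channel matrix seen by the block is circulant
    n = len(received_block)
    H = np.fft.fft(channel_impulse_response, n)
    return np.fft.ifft(np.fft.fft(received_block) / H)
```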

  13. MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of minimum entropy deconvolution is established. The problem of minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.

  14. Poly(ferrocenylsilane)-block-Polylactide Block Copolymers

    NARCIS (Netherlands)

    Roerdink, M.; van Zanten, Thomas S.; Hempenius, Mark A.; Zhong, Zhiyuan; Feijen, Jan; Vancso, Gyula J.

    2007-01-01

    A PFS/PLA block copolymer was studied to probe the effect of strong surface interactions on pattern formation in PFS block copolymer thin films. Successful synthesis of PFS-b-PLA was demonstrated. Thin films of these polymers show phase separation to form PFS microdomains in a PLA matrix, and

  15. Exact Inverse Matrices of Fermat and Mersenne Circulant Matrix

    Directory of Open Access Journals (Sweden)

    Yanpeng Zheng

    2015-01-01

    The well-known circulant matrices are applied to solve networked systems. In this paper, circulant and left circulant matrices with the Fermat and Mersenne numbers are considered. The nonsingularity of these special matrices is discussed. Meanwhile, the exact determinants and inverse matrices of these special matrices are presented.
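
    The structure underlying such exact formulas is that every circulant matrix is diagonalized by the DFT: its determinant is the product of the DFT coefficients of its first column, and its inverse is again circulant. A numerical sketch of that identity:

```python
import numpy as np

def circulant(first_col):
    # circulant matrix whose columns are cyclic down-shifts of the first column
    n = len(first_col)
    return np.column_stack([np.roll(first_col, k) for k in range(n)])

def circulant_inverse(first_col):
    # the eigenvalues of a circulant matrix are fft(first_col), so the inverse
    # is the circulant matrix with first column ifft(1 / fft(first_col));
    # nonsingularity <=> no DFT coefficient of the first column is zero
    eig = np.fft.fft(first_col)
    return circulant(np.fft.ifft(1.0 / eig))
```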

  16. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and super-resolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and utility of the method.

  17. Wolbachia Blocks Currently Circulating Zika Virus Isolates in Brazilian Aedes aegypti Mosquitoes.

    Science.gov (United States)

    Dutra, Heverton Leandro Carneiro; Rocha, Marcele Neves; Dias, Fernando Braga Stehling; Mansur, Simone Brutman; Caragata, Eric Pearce; Moreira, Luciano Andrade

    2016-06-08

    The recent association of Zika virus with cases of microcephaly has sparked a global health crisis and highlighted the need for mechanisms to combat the Zika vector, Aedes aegypti mosquitoes. Wolbachia pipientis, a bacterial endosymbiont of insects, has recently garnered attention as a mechanism for arbovirus control. Here we report that Aedes aegypti harboring Wolbachia are highly resistant to infection with two currently circulating Zika virus isolates from the recent Brazilian epidemic. Wolbachia-harboring mosquitoes displayed lower viral prevalence and intensity and decreased disseminated infection and, critically, did not carry infectious virus in the saliva, suggesting that viral transmission was blocked. Our data indicate that the use of Wolbachia-harboring mosquitoes could represent an effective mechanism to reduce Zika virus transmission and should be included as part of Zika control strategies. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  18. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    ...and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed-interval Kalman filter based deconvolution routine written in C is employed. It uses a state... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse, the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core on an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds, making it possible to perform real-time deconvolution on ultrasound data...

  19. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction

    International Nuclear Information System (INIS)

    Yang, C L; Wei, H Y; Soleimani, M; Adler, A

    2013-01-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current–voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
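
    The two ingredients, thresholding the Jacobian and a CG-type least-squares solver, can be sketched as follows. The threshold, problem sizes, and the dense storage are illustrative simplifications, not the paper's implementation (which uses true sparse storage and a block-wise parallel scheme).

```python
import numpy as np

def sparsify(J, threshold):
    # zero out small Jacobian entries (kept dense here for brevity; a real
    # code would switch to a sparse storage format to save memory)
    Js = J.copy()
    Js[np.abs(Js) < threshold] = 0.0
    return Js

def cgls(A, b, iterations=100, tol=1e-12):
    # conjugate gradient on the normal equations A^T A x = A^T b
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iterations):
        if gamma < tol:
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```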

  20. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    Science.gov (United States)

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.

  1. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  2. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed; the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples at thermal equilibrium and not at thermal equilibrium is reported. The method is accurate and computationally efficient.

  3. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that our method achieves superior performance to existing methods and potentially improves the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  4. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
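
    A minimal 1-D Richardson-Lucy iteration, without the total variation regularization the authors found best, can be sketched as follows; the PSF and iteration count are illustrative assumptions.

```python
import numpy as np

def richardson_lucy(image, psf, iterations=30):
    # Richardson-Lucy deconvolution, 1-D sketch without TV regularization
    psf = psf / psf.sum()
    mirrored = psf[::-1]
    estimate = np.full_like(image, max(image.mean(), 1e-12))
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, mirrored, mode="same")
    return estimate
```

    Each iteration re-blurs the current estimate, compares it with the data, and redistributes intensity by correlating the ratio with the PSF, which keeps the estimate non-negative throughout.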

  5. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  6. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

    Compressive sensing theory enables faithful reconstruction of signals, sparse in domain $\Psi$, at a sampling rate lower than the Nyquist criterion, by using a sampling or sensing matrix $\Phi$ that satisfies the restricted isometry property. The roles played by the sensing matrix $\Phi$ and the sparsity matrix $\Psi$ are vital for faithful reconstruction. If the sensing matrix is dense, it takes large storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix that incurs the least computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with a few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of the reconstructed medical images using the various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ($\Phi\Psi$) is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
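
    The appeal of a sparse block circulant sensing matrix can be illustrated with a generic construction (not the paper's exact SBCM design): only the first row needs to be stored, and matrix-vector products reduce to FFTs:

```python
import numpy as np

# Sparse circulant sensing matrix: the first row holds a few random
# +/-1 entries, every other row is a cyclic shift, and a random subset
# of rows forms the m x n measurement matrix.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                  # signal length, measurements, nonzeros
first_row = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
first_row[support] = rng.choice([-1.0, 1.0], size=k)

C = np.stack([np.roll(first_row, i) for i in range(n)])   # full circulant
Phi = C[rng.choice(n, size=m, replace=False)]             # sensing matrix

# Fast product: C @ x is a circular cross-correlation with the first
# row, so it can be evaluated in O(n log n) with FFTs instead of a
# dense O(n^2) matrix multiply.
x = rng.standard_normal(n)
y_dense = C @ x
y_fft = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(first_row))).real
```

The FFT route also means the sensing operator never needs to be materialized at all during reconstruction.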

  7. Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization

    KAUST Repository

    Peng, Chengbin

    2014-12-01

    Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges, respectively. When certain nodes are highly connected with each other, they form a cluster, which is called a community in our context. It is usually assumed that each node belongs to only one community, but evidence from biology and social networks reveals that communities often overlap with each other; in other words, one node can belong to multiple communities. In light of that, mixed membership stochastic block models (MMB) have been developed to model networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in and out connections and one probability matrix. When the probability of connections between communities is significantly small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithm can detect overlapping communities whether or not the number of communities is known. Experiments show that our algorithm achieves better community detection performance than the traditional NMF algorithm. © 2014 IEEE.
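
    The constrained NMF inference is not reproduced here, but the underlying multiplicative-update NMF on a toy graph with two overlapping communities gives the flavor (a generic Lee-Seung sketch, not the authors' algorithm):

```python
import numpy as np

# Toy "adjacency" matrix (with self-links) for two overlapping
# communities: nodes 0-5 in community A, nodes 4-9 in community B.
rng = np.random.default_rng(1)
M = np.zeros((10, 10))
M[:6, :6] = 1.0
M[4:, 4:] = 1.0

r = 2                                 # number of communities
W = rng.random((10, r)) + 0.1         # node-community memberships
H = rng.random((r, 10)) + 0.1
for _ in range(500):
    # Lee-Seung multiplicative updates for min ||M - W H||_F^2, W,H >= 0
    H *= (W.T @ M) / (W.T @ W @ H + 1e-12)
    W *= (M @ H.T) / (W @ H @ H.T + 1e-12)

rel_err = np.linalg.norm(M - W @ H) / np.linalg.norm(M)
```

The rows of `W` for the overlap nodes 4-5 end up with weight in both columns, which is exactly the mixed-membership behavior the paper exploits.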

  8. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  9. New block matrix spectral problem and Hamiltonian structure of the discrete integrable coupling system

    Energy Technology Data Exchange (ETDEWEB)

    Yu Fajun [College of Maths and Systematic Science, Shenyang Normal University, Shenyang 110034 (China)], E-mail: yufajun888@163.com

    2008-06-09

    In [W.X. Ma, J. Phys. A: Math. Theor. 40 (2007) 15055], Prof. Ma gave a beautiful result (a discrete variational identity). In this Letter, based on a discrete block matrix spectral problem, a new hierarchy of Lax integrable lattice equations with four potentials is derived. Using the discrete variational identity, we obtain the Hamiltonian structure of the discrete soliton equation hierarchy. Finally, an integrable coupling system of the soliton equation hierarchy and its Hamiltonian structure are obtained through the discrete variational identity.

  11. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

    In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data, which represent a convolution of this function with an instrument-dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error and down to ∼20 μeV with 10% relative error. (orig.)
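
    The core Tikhonov step (before the paper's non-linear modifications) can be sketched on a toy resolution function; the kernel, noise level and regularization weight below are illustrative assumptions:

```python
import numpy as np

# Tikhonov-regularized deconvolution in closed form:
#   min ||K s - d||^2 + lam ||s||^2   =>   s = (K^T K + lam I)^{-1} K^T d
n = 100
e = np.linspace(-5, 5, n)                       # "energy transfer" axis
res = np.exp(-0.5 * (e / 0.4) ** 2)             # instrument resolution function
K = np.stack([np.roll(res, i - n // 2) for i in range(n)])  # convolution matrix
s_true = np.exp(-0.5 * ((e - 1.0) / 0.8) ** 2)  # toy "scattering function"
d = K @ s_true + 0.01 * np.random.default_rng(2).standard_normal(n)  # noisy data

lam = 1e-2                                      # regularization parameter
s_est = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ d)
```

The choice of `lam` plays the role the abstract alludes to: too small and the noise is amplified, too large and the reconstructed line shape is over-smoothed.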

  12. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
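
    The Fourier-space deconvolution used in both applications can be sketched generically (a toy impulse response with a small "water level" guard against near-zero response bins; in this noiseless toy no bin is actually clipped, so the inversion is essentially exact):

```python
import numpy as np

# Deconvolution in Fourier space: divide the measured output spectrum by
# the system's frequency response, protecting bins where the response is
# close to zero.
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.03 * t)

h = np.exp(-np.arange(64) / 10.0)          # toy impulse response of the system
h /= h.sum()
H = np.fft.rfft(h, n)                      # system frequency response
y = np.fft.irfft(np.fft.rfft(x) * H, n)    # distorted output signal

eps = 1e-3 * np.abs(H).max()
H_reg = np.where(np.abs(H) < eps, eps, H)  # "water level" regularization
x_rec = np.fft.irfft(np.fft.rfft(y) / H_reg, n)
```

With measurement noise, `eps` would have to be raised (or replaced by a Wiener-style weighting) in the bands where the response is weak.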

  13. Increased Obesity-Associated Circulating Levels of the Extracellular Matrix Proteins Osteopontin, Chitinase-3 Like-1 and Tenascin C Are Associated with Colon Cancer.

    Directory of Open Access Journals (Sweden)

    Victoria Catalán

    Excess adipose tissue represents a major risk factor for the development of colon cancer, with inflammation and extracellular matrix (ECM) remodeling being proposed as plausible mechanisms. The aim of this study was to investigate whether obesity can influence circulating levels of inflammation-related extracellular matrix proteins in patients with colon cancer (CC), promoting a microenvironment favorable for tumor growth. Serum samples obtained from 79 subjects [26 lean (LN) and 53 obese (OB)] were used in the study. Enrolled subjects were further subclassified according to the established diagnostic protocol for CC (44 without CC and 35 with CC). Anthropometric measurements as well as circulating metabolites and hormones were determined. Circulating concentrations of the ECM proteins osteopontin (OPN), chitinase-3-like protein 1 (YKL-40), tenascin C (TNC) and lipocalin-2 (LCN-2) were determined by ELISA. Significant differences in circulating OPN, YKL-40 and TNC concentrations between the experimental groups were observed, with levels significantly increased by obesity (P<0.01) and colon cancer (P<0.05). LCN-2 levels were affected by obesity (P<0.05), but no differences were detected regarding the presence or absence of CC. A positive association (P<0.05) with different inflammatory markers was also detected. To our knowledge, we herein show for the first time that obese patients with CC exhibit increased circulating levels of OPN, YKL-40 and TNC, providing further evidence for the influence of obesity on CC development via ECM proteins and representing promising diagnostic biomarkers or target molecules for therapeutics.

  14. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.

  15. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  16. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolution for different volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm up to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume-averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution the penumbra decreased by 1 mm for field sizes ranging from 2 × 2 cm up to 20 × 20 cm, along both the lateral and longitudinal directions. For field sizes from 20 × 20 cm up to 30 × 30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers involved in the study. The variation in the penumbra differences was of the order of 0.1 to 0.3 mm between the deconvolved profiles along the lateral and longitudinal directions for all the chambers under study. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and is not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume-averaging effect.

  17. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  18. Density matrix renormalization group for a highly degenerate quantum system: Sliding environment block approach

    Science.gov (United States)

    Schmitteckert, Peter

    2018-04-01

    We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.

  19. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNR down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).

  20. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise exist widely when spectra are captured with an infrared spectrometer, especially since the aging of instruments has become a serious problem. In this paper, a blind spectral deconvolution method is proposed by introducing the similarity of multiscales. Since there is a similarity between latent spectra at different scales, it is used as prior knowledge to constrain the estimated latent spectrum to be similar to that of the previous scale, reducing the artifacts produced by deconvolution. The experimental results indicate that the proposed method achieves better performance than state-of-the-art methods and obtains satisfactory deconvolution results with fewer artifacts. The recovered infrared spectra make it easy to extract the spectral features and recognize unknown objects. (paper)

  1. Studying Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating the STF, but it requires strict recording conditions: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, defined as the peak divided by the background value, where the background value is the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer source scaling using the STFs, and we will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
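
    The sdc measure described above is easy to state in code (a sketch from the description in the abstract; the sampling interval and toy traces are illustrative):

```python
import numpy as np

# "sdc" spikiness measure: peak of the deconvolution divided by the mean
# absolute background, excluding a 10 s window around the peak.
def sdc(trace, dt, peak_halfwidth_s=5.0):
    ipk = int(np.abs(trace).argmax())
    half = int(peak_halfwidth_s / dt)          # exclude 10 s total around peak
    mask = np.ones(len(trace), dtype=bool)
    mask[max(0, ipk - half): ipk + half + 1] = False
    background = np.mean(np.abs(trace[mask]))
    return np.abs(trace[ipk]) / background

dt = 0.1                                       # sampling interval in seconds
rng = np.random.default_rng(4)
spiky = 0.05 * rng.standard_normal(2000)
spiky[1000] = 5.0                              # pulse-like deconvolution
flat = 0.05 * rng.standard_normal(2000)        # failed, non-spiky deconvolution
```

With a threshold around sdc ≈ 10, the pulse-like trace passes easily and the flat one is rejected.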

  2. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Background: For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results: In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion: The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that are themselves uncertain, such as a regression fit to probe sequence models. We demonstrate its efficacy through extensive simulations as well as on various biological data.

  3. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.

  4. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the actual source distribution with the beamformer's point-spread function, and that the point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy algorithm. This investigation examines the matter with computer simulations and measurements.

  5. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and allocate special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques, based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution

  6. The Explicit Identities for Spectral Norms of Circulant-Type Matrices Involving Binomial Coefficients and Harmonic Numbers

    Directory of Open Access Journals (Sweden)

    Jianwei Zhou

    2014-01-01

    The explicit formulae of spectral norms for circulant-type matrices are investigated; the matrices considered are the circulant matrix, the skew-circulant matrix, and the g-circulant matrix. The entries are products of binomial coefficients with harmonic numbers. Explicit identities for these spectral norms are obtained, and some numerical tests are listed to verify the results.
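
    The identity underlying such formulae, that a circulant matrix is diagonalized by the DFT so its spectral norm is the largest |DFT value| of its first row, can be checked numerically (entries chosen as binomial coefficients times harmonic numbers to match the paper's setting):

```python
from math import comb

import numpy as np

# Spectral norm of a circulant matrix via the DFT of its first row.
n = 6
harmonic = np.cumsum(1.0 / np.arange(1, n + 1))               # H_1, ..., H_n
first_row = np.array([comb(n, j + 1) * harmonic[j] for j in range(n)], float)

C = np.stack([np.roll(first_row, i) for i in range(n)])       # circulant matrix
norm_direct = np.linalg.norm(C, 2)                            # largest singular value
norm_fft = np.abs(np.fft.fft(first_row)).max()                # largest |eigenvalue|
```

Because a circulant is a normal matrix, its singular values are the magnitudes of its eigenvalues, and the eigenvalues are exactly the DFT of the first row.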

  7. Moderator circulation in CANDU reactors

    International Nuclear Information System (INIS)

    Fath, H.E.S.; Hussein, M.A.

    1989-01-01

    A two-dimensional computer code that is capable of predicting the moderator flow and temperature distribution inside the CANDU calandria is presented. The code uses a new approach to simulate the calandria tube matrix by blocking the cells containing the tubes in the finite difference mesh. A jet momentum-dominant flow pattern is predicted in the nonisothermal case, and the buoyancy force resulting from nuclear heating is found to enhance the speed of circulation. Hot spots are located in low-velocity areas at the top of the calandria and below the inlet jet level between the fuel channels. A parametric study is carried out to investigate the effect of moderator inlet velocity, moderator inlet nozzle location, and geometric scaling. The results indicate that decreasing the moderator inlet velocity has no significant influence on the general features of the flow pattern (i.e., momentum dominant); however, too many high-temperature hot spots appear within the fuel channels.

  8. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
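
    Plain SOR with a fixed relaxation parameter (the building block the authors equip with adaptive relaxation) can be sketched as follows:

```python
import numpy as np

# Successive overrelaxation: Gauss-Seidel sweeps blended with a
# relaxation parameter omega; converges for SPD systems when 0 < omega < 2.
def sor(A, b, omega=1.5, n_iter=200):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]      # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

rng = np.random.default_rng(5)
M = rng.standard_normal((20, 20))
A = M.T @ M + 20 * np.eye(20)          # well-conditioned SPD test matrix
b = rng.standard_normal(20)
x = sor(A, b)
```

In the deconvolution setting, `A` would be the (regularized) normal-equations matrix of the blur operator; the paper's contribution is updating `omega` adaptively between sweeps rather than fixing it.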

  9. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  10. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve the image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate, small initial PSF, blind deconvolution improves the overall image quality of ultrasound images, yielding much better SNR and image resolution. The time consumption of these methods is also reported; it does not increase significantly on a GPU platform.

  11. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Roč. 26, č. 5 (2017), s. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  12. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
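    Step (2) above is a Wiener deconvolution; a generic single-channel version can be sketched as follows. The biphasic kernel and the constant SNR value are hypothetical stand-ins for the MUAP and the measured noise level.

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    # recover x from y = h * x by Wiener inverse filtering; the 1/snr term
    # regularizes frequencies where the kernel spectrum is weak
    n = len(y)
    H = np.fft.rfft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.fft.irfft(G * np.fft.rfft(y, n), n)

# toy example: a smooth "rate modulation" blurred by a biphasic kernel
t = np.arange(256)
x = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)          # true modulation
h = np.zeros(256)
h[0:5] = [0.0, 1.0, 0.0, -1.0, 0.0]                # biphasic MUAP-like kernel
y = np.fft.irfft(np.fft.rfft(h) * np.fft.rfft(x))  # circular convolution
x_hat = wiener_deconvolve(y, h, snr=1e6)
print(np.argmax(x_hat))  # the modulation peak is recovered near t = 100
```

    Because the biphasic kernel has no DC component, the recovered modulation is only defined up to an additive constant, which is why the filter's regularization matters even in this noise-free setting.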

  13. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also exponentially decreases with source-receiver offset, where the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with an exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long wavelength structure from the artificial low frequency components coming from an exponential damping.
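    The deconvolution filter described above can be sketched trace by trace as a regularized spectral division, with a lag-weighted penalty as the objective. The penalty shape, regularization constant, and test signals below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def decon_filter(d_obs, d_syn, eps=1e-3):
    # matching filter w with d_obs ≈ w * d_syn via regularized spectral
    # division; the division rebalances traces whose energy decays with offset
    D, S = np.fft.rfft(d_obs), np.fft.rfft(d_syn)
    return np.fft.irfft(D * np.conj(S) / (np.abs(S) ** 2 + eps), len(d_obs))

def decon_objective(d_obs, d_syn):
    # penalize filter energy away from zero lag: small when the synthetic
    # already explains the data up to amplitude, larger for traveltime errors
    w = decon_filter(d_obs, d_syn)
    lag = np.minimum(np.arange(len(w)), len(w) - np.arange(len(w)))
    return np.sum(lag ** 2 * w ** 2)

rng = np.random.default_rng(3)
syn = rng.standard_normal(256)
obs_aligned = 0.1 * syn            # pure amplitude difference: tiny penalty
obs_shifted = np.roll(syn, 8)      # traveltime shift: large penalty
print(decon_objective(obs_aligned, syn) < decon_objective(obs_shifted, syn))
```

    The comparison at the end shows the key property the abstract relies on: amplitude imbalance is normalized away by the division, while kinematic (traveltime) mismatch is still penalized.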

  14. Deconvolution analysis of 99mTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of 99mTc-methylene diphosphonate (MDP) and 47Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of 99mTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The 99mTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between 99mTc-MDP bone accumulation rates and the results of 47Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). As a result, deconvolution analysis of regional 99mTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of 99mTc-MDP kinetics and 47Ca kinetics might suggest a preferential binding of 99mTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  15. Deconvoluting the mechanism of microwave annealing of block copolymer thin films.

    Science.gov (United States)

    Jin, Cong; Murphy, Jeffrey N; Harris, Kenneth D; Buriak, Jillian M

    2014-04-22

    The self-assembly of block copolymer (BCP) thin films is a versatile method for producing periodic nanoscale patterns with a variety of shapes. The key to attaining a desired pattern or structure is the annealing step undertaken to facilitate the reorganization of nanoscale phase-segregated domains of the BCP on a surface. Annealing BCPs on silicon substrates using a microwave oven has been shown to be very fast (seconds to minutes), both with and without contributions from solvent vapor. The mechanism of the microwave annealing process remains, however, unclear. This work endeavors to uncover the key steps that take place during microwave annealing, which enable the self-assembly process to proceed. Through the use of in situ temperature monitoring with a fiber optic temperature probe in direct contact with the sample, we have demonstrated that the silicon substrate on which the BCP film is cast is the dominant source of heating if the doping of the silicon wafer is sufficiently low. Surface temperatures as high as 240 °C are reached in under 1 min for lightly doped, high resistivity silicon wafers (n- or p-type). The influence of doping, sample size, and BCP composition was analyzed to rule out other possible mechanisms. In situ temperature monitoring of various polymer samples (PS, P2VP, PMMA, and the BCPs used here) showed that the polymers do not heat to any significant extent on their own with microwave irradiation of this frequency (2.45 GHz) and power (∼600 W). It was demonstrated that BCP annealing can be effectively carried out in 60 s on non-microwave-responsive substrates, such as highly doped silicon, indium tin oxide (ITO)-coated glass, glass, and Kapton, by placing a piece of high resistivity silicon wafer in contact with the sample; in this configuration, the silicon wafer is termed the heating element. Annealing and self-assembly of polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP) and polystyrene-block-poly(methyl methacrylate) (PS

  16. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomic image processing. Due to the existence of noise in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods such as aperture or PSF-fitting photometry are commonly used. Iterative methods are important particularly in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
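    A minimal sketch of such a simulator is shown below; the Gaussian PSF and pure read-noise model are simplifying assumptions (the simulator described above supports more general PSFs, noise types, and spatial distributions), and the function name is hypothetical.

```python
import numpy as np

def simulate_field(shape, stars, fwhm, noise_sigma, rng):
    # render point sources through a Gaussian PSF and add read noise;
    # the (row, col, flux) list is the photometric ground truth
    sigma = fwhm / 2.3548                     # FWHM -> Gaussian sigma
    yy, xx = np.indices(shape)
    img = np.zeros(shape)
    for r, c, f in stars:
        img += f * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    img /= 2 * np.pi * sigma ** 2             # unit-volume PSF conserves flux
    return img + rng.normal(0.0, noise_sigma, shape)

rng = np.random.default_rng(4)
stars = [(20, 20, 3.0), (40, 44, 2.0)]
clean = simulate_field((64, 64), stars, fwhm=4.0, noise_sigma=0.0, rng=rng)
print(clean.sum())  # total flux, close to 3.0 + 2.0
```

    Because the injected fluxes are known exactly, photometry recovered by any deconvolution method can be scored against them, which is the benchmarking role the simulator plays.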

  17. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  18. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at the different energies.
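    A common form of this compensation models the measured image as the primary image plus a scatter fraction times the primary convolved with a shift-invariant scatter kernel, so the primary follows from a single Fourier-domain division. The Gaussian kernel and scatter fraction below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def scatter_deconvolve(image, kernel, scatter_fraction):
    # model: measured = primary + scatter_fraction * (primary ⊗ kernel),
    # so the primary follows from one Fourier-domain division
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    return np.fft.ifft2(np.fft.fft2(image) / (1.0 + scatter_fraction * K)).real

n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
kernel = np.exp(-(xx**2 + yy**2) / (2 * 6.0**2))
kernel /= kernel.sum()                      # normalized scatter spread kernel
primary_true = np.zeros((n, n))
primary_true[24:40, 24:40] = 1.0            # bright block as a stand-in anatomy
K = np.fft.fft2(np.fft.ifftshift(kernel))
measured = primary_true + 0.8 * np.fft.ifft2(np.fft.fft2(primary_true) * K).real
primary_est = scatter_deconvolve(measured, kernel, 0.8)
print(np.max(np.abs(primary_est - primary_true)))
```

    In this noise-free toy the inversion is exact by construction; the paper's finding is that one kernel shape per energy suffices across anatomically different regions.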

  19. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blind image is introduced into the feedback process to avoid the image restoration deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as a part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
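    The deconvolution of one record by another can be sketched as a water-level-regularized spectral division; the synthetic below uses a pure delay between two traces, standing in for propagation between borehole and surface instruments. The water-level fraction is an illustrative choice.

```python
import numpy as np

def deconvolve_records(u_top, u_bottom, eps_frac=0.01):
    # D = U_top conj(U_bot) / (|U_bot|^2 + eps), with a water-level eps
    # set as a fraction of the mean spectral power of the reference record
    n = len(u_top)
    Ut, Ub = np.fft.rfft(u_top, n), np.fft.rfft(u_bottom, n)
    power = np.abs(Ub) ** 2
    return np.fft.irfft(Ut * np.conj(Ub) / (power + eps_frac * power.mean()), n)

# synthetic test: the surface record is the borehole record delayed by 12 samples
rng = np.random.default_rng(0)
u_borehole = rng.standard_normal(512)
u_surface = np.roll(u_borehole, 12)       # pure propagation delay
d = deconvolve_records(u_surface, u_borehole)
print(np.argmax(d))  # travel time between the sensors, in samples
```

    The lag of the deconvolved peak gives the travel time between the two sensors, from which a near-surface velocity follows once the sensor separation is known.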

  1. Suspected-target pesticide screening using gas chromatography-quadrupole time-of-flight mass spectrometry with high resolution deconvolution and retention index/mass spectrum library.

    Science.gov (United States)

    Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong

    2014-10-01

    A strategy for suspected-target screening of pesticide residues in complicated matrices was developed using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. The initial detection of components in a matrix was done by high resolution mass spectrum deconvolution; the preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and the final confirmation was accomplished by accurate mass measurements of representative ions with their response ratios from the MS(1) spectra or representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow to real samples, three matrices of apple, spinach, and scallion, each spiked with 165 test pesticides in a set of concentrations, were selected as the models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms, based on a narrow mass window (5 mDa), with suspected-target compounds identified by the similarity match of deconvoluted full mass spectra and filtering of linear RIs. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, the application of product ion spectra was capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library seems to be one of the most efficient tools for the analysis of suspected-target pesticide residues.

  2. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error (p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)
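    The iterative maximum-likelihood blind estimation described above is often realized as alternating Richardson-Lucy updates for the image and the PSF. The toy sketch below uses that scheme with circular convolutions, a deliberately wrong initial PSF width, and a synthetic phantom, none of which come from the paper.

```python
import numpy as np

def conv(a, b):
    # circular convolution via FFT
    return np.fft.irfft2(np.fft.rfft2(a) * np.fft.rfft2(b), a.shape)

def corr(a, b):
    # circular cross-correlation via FFT
    return np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), a.shape)

def blind_rl(observed, psf0, iters=50):
    # alternate Richardson-Lucy multiplicative updates for image and PSF
    img = np.full_like(observed, observed.mean())
    psf = psf0 / psf0.sum()
    for _ in range(iters):
        img *= corr(observed / (conv(img, psf) + 1e-12), psf)
        psf *= corr(observed / (conv(img, psf) + 1e-12), img)
        psf /= psf.sum()                      # keep the PSF normalized
    return img, psf

# toy phantom: two hot spots on a warm background, blurred by a Gaussian PSF
n = 32
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
true_psf = np.fft.ifftshift(np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)))
true_psf /= true_psf.sum()
truth = np.full((n, n), 0.1)
truth[10, 12] += 5.0
truth[20, 8] += 3.0
observed = conv(truth, true_psf)
psf0 = np.fft.ifftshift(np.exp(-(xx**2 + yy**2) / (2 * 4.0**2)))  # wrong width
img, psf = blind_rl(observed, psf0)
refit_err = np.mean((conv(img, psf) - observed) ** 2)
start_err = np.mean((observed.mean() - observed) ** 2)
print(refit_err < start_err)
```

    The check at the end only verifies that the jointly estimated image and PSF explain the blurred data far better than the flat initial guess; how well the deblurred image supports delineation is the question the paper answers experimentally.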

  3. Related Drupal Nodes Block

    NARCIS (Netherlands)

    Van der Vegt, Wim

    2010-01-01

    This module exposes a block that uses Latent Semantic Analysis (LSA) internally to suggest three nodes that are relevant to the node a user is viewing. This module performs three tasks. 1) It periodically indexes a Drupal site and generates an LSA Term Document Matrix.

  4. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  5. Entanglement in Gaussian matrix-product states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Ericsson, Marie

    2006-01-01

    Gaussian matrix-product states are obtained as the outputs of projection operations from an ancillary space of M infinitely entangled bonds connecting neighboring sites, applied at each of N sites of a harmonic chain. Replacing the projections by associated Gaussian states, the building blocks, we show that the entanglement range in translationally invariant Gaussian matrix-product states depends on how entangled the building blocks are. In particular, infinite entanglement in the building blocks produces fully symmetric Gaussian states with maximum entanglement range. From their peculiar properties of entanglement sharing, a basic difference with spin chains is revealed: Gaussian matrix-product states can possess unlimited, long-range entanglement even with minimum number of ancillary bonds (M=1). Finally we discuss how these states can be experimentally engineered from N copies of a three-mode building block and N two-mode finitely squeezed states

  6. Sparse-matrix factorizations for fast symmetric Fourier transforms

    International Nuclear Information System (INIS)

    Sequel, J.

    1987-01-01

    This work proposes new fast algorithms for computing the discrete Fourier transform of certain families of symmetric sequences. Sequences commonly found in problems of structure determination by x-ray crystallography and in numerical solutions of boundary-value problems in partial differential equations are dealt with. In the algorithms presented, the redundancies in the input and output data, due to the presence of symmetries in the input data sequence, were eliminated. Using ring-theoretical methods, a matrix representation is obtained for the remaining calculations, which factors as the product of a complex block-diagonal matrix times an integral matrix. A basic two-step algorithm scheme arises from this factorization, with a first step consisting of pre-additions and a second step containing the calculations involved in computing with the blocks in the block-diagonal factor. These blocks are structured as block-Hankel matrices, and two sparse-matrix factoring formulas are developed in order to diminish their arithmetic complexity.

  7. Chaotic Image Encryption Algorithm Based on Circulant Operation

    Directory of Open Access Journals (Sweden)

    Xiaoling Huang

    2013-01-01

    A novel chaotic image encryption scheme based on the time-delay Lorenz system and the circulant matrix is presented in this paper. Making use of the chaotic sequence generated by the time-delay Lorenz system, the pixel permutation is carried out in diagonal and antidiagonal directions according to the first and second components. Then, a pseudorandom chaotic sequence is generated again from the time-delay Lorenz system using all components. Modular operation is further employed for diffusion by blocks, in which the control parameter is generated depending on the plain-image. Numerical experiments show that the proposed scheme possesses the properties of a large key space to resist brute-force attack, sensitive dependence on secret keys, uniform distribution of gray values in the cipher-image, and zero correlation between two adjacent cipher-image pixels. Therefore, it can be adopted as an effective and fast image encryption algorithm.
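    The circulant structure used by such schemes (and in the block-circulant matrices of this collection's theme) has a useful computational property: a circulant matrix is diagonalized by the DFT, so multiplying by it amounts to a circular convolution computable with FFTs. A minimal demonstration:

```python
import numpy as np

def circulant(c):
    # circulant matrix whose first column is c: C[i, j] = c[(i - j) mod n]
    return np.array([np.roll(c, k) for k in range(len(c))]).T

c = np.array([4.0, 1.0, 0.0, 1.0])
C = circulant(c)
x = np.array([1.0, 2.0, 3.0, 4.0])
# multiplying by a circulant matrix is circular convolution, so it can be
# done with FFTs in O(n log n) instead of O(n^2)
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(C @ x, y_fft))
```

    This FFT shortcut is why circulant (and block-circulant) operators appear so often in fast permutation, diffusion, and deconvolution schemes.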

  8. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility has been used to perform deconvolution analysis to both experimental and reference glow curves resulted from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.

  9. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility has been used to perform deconvolution analysis to both experimental and reference glow curves resulted from the Glow Curve Analysis Intercomparison project. The simple interface of this programme combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters. (authors)
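    What Solver performs here is nonlinear least-squares peak fitting, and the same decomposition can be scripted, for example with SciPy. The Gaussian shapes below are a simplifying stand-in for the first-order TL glow-peak expressions used in practice, and all parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(x, a, mu, sigma):
    # Gaussian stand-in for a single glow-peak component
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_peaks(x, a1, m1, s1, a2, m2, s2):
    return peak(x, a1, m1, s1) + peak(x, a2, m2, s2)

x = np.linspace(0.0, 100.0, 400)
p_true = (5.0, 35.0, 6.0, 3.0, 60.0, 9.0)
y = two_peaks(x, *p_true)                      # synthetic noise-free curve
p0 = (4.0, 30.0, 5.0, 4.0, 65.0, 8.0)          # rough starting guesses
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
print(np.round(popt, 3))  # recovers the two components' parameters
```

    Spreadsheet Solver minimizes the same squared-residual objective over the same parameters; the appeal noted in the abstract is the simple interface, not a different algorithm class.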

  10. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against those measured by well-established flow cytometry, even after batch correction was applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mono-nucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283
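    The baseline strategy the abstract builds on can be sketched as non-negative least squares against reference profiles; PERT's contribution, not modeled here, is the shared multiplicative perturbation of those profiles. The matrix sizes and values below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_fractions(bulk, reference):
    # constrained regression: bulk ≈ reference @ w with w >= 0,
    # renormalized to proportions summing to one
    w, _ = nnls(reference, bulk)
    return w / w.sum()

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 10.0, size=(200, 3))   # genes x cell types
w_true = np.array([0.5, 0.3, 0.2])
bulk = reference @ w_true                           # noise-free mixed profile
w_est = deconvolve_fractions(bulk, reference)
print(w_est)  # close to the mixing proportions [0.5, 0.3, 0.2]
```

    The abstract's point is precisely that this baseline breaks down when the true constituent profiles are perturbed versions of the references, which is the case PERT handles.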

  11. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals

  12. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method to get fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown
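    Inverse filtering with an explicit noise filter, as discussed above, can be sketched with an ideal low-pass cutoff. The band-limited test signal below is constructed so that the cutoff provably loses nothing, a guarantee that is of course not available for real experimental data, which is exactly why the paper's graphical optimum-finding matters.

```python
import numpy as np

def filtered_deconvolution(signal, irf, cutoff):
    # inverse filtering with an ideal low-pass noise filter: divide the
    # spectra, keep only frequency bins up to 'cutoff', transform back
    n = len(signal)
    S, R = np.fft.rfft(signal, n), np.fft.rfft(irf, n)
    D = np.zeros_like(S)
    keep = np.arange(len(S)) <= cutoff
    D[keep] = S[keep] / R[keep]
    return np.fft.irfft(D, n)

# band-limited toy signal, so the sharp cutoff loses nothing by construction
rng = np.random.default_rng(5)
n = 256
X = np.zeros(n // 2 + 1, dtype=complex)
X[1:20] = rng.standard_normal(19) + 1j * rng.standard_normal(19)
x = np.fft.irfft(X, n)
t = np.arange(n)
irf = np.exp(-0.5 * (np.minimum(t, n - t) / 5.0) ** 2)   # circular Gaussian IRF
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(irf), n)   # convolved "measurement"
x_rec = filtered_deconvolution(y, irf, cutoff=25)
print(np.max(np.abs(x_rec - x)) < 1e-9)
```

    Setting the cutoff too high amplifies noise through the division by the decaying IRF spectrum; setting it too low distorts the signal. Choosing between these failure modes is the optimization problem the paper addresses graphically.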

  13. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To extract combined gearbox failures in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution is proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract the modulation features embedded in the observed signals, then the CFPA algorithm is employed to perform complex-domain blind separation, and finally the J-divergence of the spectrum is employed as a distance measure to resolve the permutation. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gearbox combined-failure detection in practice.

  14. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

    The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in the processing of experimental spectra of Doppler broadening of the annihilation line in positron annihilation measurements is described. By comparing results from both algorithms, it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if the 1274 keV 22Na peak, measured simultaneously with the annihilation peak, is used for deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes of the annihilation peak (e.g. the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
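    The two iterations compared above have simple forms: Van Cittert is additive and Gold is multiplicative (hence positivity-preserving, which suits counting spectra). The sketch below applies both to a toy tridiagonal response matrix, an illustrative stand-in for a detector response, not the paper's setup.

```python
import numpy as np

def van_cittert(y, H, iters=300, mu=1.0):
    # additive updates x <- x + mu (y - H x); converges when the
    # eigenvalues of mu*H lie in (0, 2)
    x = y.copy()
    for _ in range(iters):
        x = x + mu * (y - H @ x)
    return x

def gold(y, H, iters=1000):
    # multiplicative updates x <- x * y / (H x); keeps the estimate
    # positive throughout the iteration
    x = np.full_like(y, y.mean())
    for _ in range(iters):
        x = x * y / (H @ x)
    return x

n = 32
H = sum(np.diag(np.full(n - abs(k), v), k) for k, v in [(-1, 0.1), (0, 0.8), (1, 0.1)])
t = np.arange(n)
x_true = 1.0 + np.exp(-0.5 * ((t - 16) / 3.0) ** 2)   # smooth positive "spectrum"
y = H @ x_true
x_vc = van_cittert(y, H)
x_gold = gold(y, H)
print(np.max(np.abs(x_vc - x_true)), np.max(np.abs(x_gold - x_true)))
```

    On this well-conditioned, noise-free toy both iterations recover the input; the paper's comparison concerns their differing behaviour on real spectra, where Gold's positivity constraint proves advantageous.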

  15. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...

  16. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model, thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  17. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been made, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes, one which retrieves all source parameters (location, spectrum, polarization, flux) and achieves the best possible resolution and sensitivity at the same time, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: first, what are the best data-space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data-space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but until now not both at once. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both.
Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  18. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

    After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier-transform method and iterative numerical methods. Positivity of the measured magnitude is the object of a new method by Yvon Biraud. Analytic continuation of the Fourier transform applied to the unknown function has been studied by Jean-Paul Sheidecker. An extensive bibliography is given [fr

  19. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  20. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the cost of I/O transmission is cut down. We ran experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
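
    For reference, the computation being blocked is a sparse power iteration. A standard sketch (not the paper's blocking scheme; the damping factor and the no-dangling-node assumption are mine):

```python
import numpy as np
from scipy.sparse import diags

def pagerank(adj, d=0.85, tol=1e-10):
    # Power iteration r <- d * M r + (1 - d)/n on the column-stochastic
    # link matrix M stored sparsely; blocking schemes partition M, but
    # the iteration itself is unchanged.
    # Assumes every node has at least one out-link (no dangling nodes).
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    M = (diags(1.0 / out_deg) @ adj).T.tocsr()  # normalize rows, transpose
    r = np.full(n, 1.0 / n)
    while True:
        r_new = d * (M @ r) + (1.0 - d) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

    Because M is column-stochastic, the ranks stay normalized to 1, and the iteration contracts with factor d, so convergence is guaranteed.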

  1. Porting of the DBCSR library for Sparse Matrix-Matrix Multiplications to Intel Xeon Phi systems

    OpenAIRE

    Bethune, Iain; Gloess, Andeas; Hutter, Juerg; Lazzaro, Alfio; Pabst, Hans; Reid, Fiona

    2017-01-01

    Multiplication of two sparse matrices is a key operation in the simulation of the electronic structure of systems containing thousands of atoms and electrons. The highly optimized sparse linear algebra library DBCSR (Distributed Block Compressed Sparse Row) has been specifically designed to efficiently perform such sparse matrix-matrix multiplications. This library is the basic building block for linear scaling electronic structure theory and low scaling correlated methods in CP2K. It is para...

  2. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation (TV) image deconvolution. The approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The TV image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section
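
    The discrepancy-principle idea can be sketched for the simpler Tikhonov case (a sketch only; the paper treats Total Variation with ADMM and a model-function/Newton refinement, which is more involved than this): for a periodic blur the regularized solution has a closed form in the Fourier domain, and the residual norm grows monotonically with the regularization parameter, so the Morozov condition can be solved by bisection.

```python
import numpy as np

def tikhonov_deconv(y, h, lam):
    # Closed-form Tikhonov solution for a circulant blur:
    # X = Y * conj(H) / (|H|^2 + lam).
    H, Y = np.fft.fft(h), np.fft.fft(y)
    return np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + lam)))

def residual_norm(y, h, lam):
    x = tikhonov_deconv(y, h, lam)
    Hx = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))
    return np.linalg.norm(Hx - y)

def morozov_lambda(y, h, delta, lo=1e-12, hi=1e2, iters=100):
    # The residual ||H x_lam - y|| is monotone in lam, so the Morozov
    # condition residual == delta (the noise level) is solved by
    # bisection on a logarithmic scale.
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if residual_norm(y, h, mid) < delta:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```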

  3. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  4. Multi-block methods in multivariate process control

    DEFF Research Database (Denmark)

    Kohonen, J.; Reinikainen, S.P.; Aaljoki, K.

    2008-01-01

    In chemometric studies all predictor variables are usually collected in one data matrix X. This matrix is then analyzed by PLS regression or other methods. When data from several different sub-processes are collected in one matrix, there is a possibility that the effects of some sub-processes may be obscured. With multi-block (MB) methods the effect of a sub-process can be seen, and an example with two blocks, near infra-red (NIR) and process data, is shown. The results show improvements in the modelling task when an MB-based approach is used. This way of working with data gives more information on the process than if all data are in one X-matrix. The procedure is demonstrated on an industrial continuous process, where knowledge about the sub-processes is available and the X-matrix can be divided into blocks between process variables and NIR spectra.

  5. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (≈500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (spanning many bunch-crossing periods) is sampled and digitised in an ADC, and then the deconvolution procedure is applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies performed are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, ADC and further digital processing implemented on a PC. (author)

  6. Isomorphic Operators and Functional Equations for the Skew-Circulant Algebra

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    The skew-circulant matrix has been used in solving ordinary differential equations. We prove that the set of skew-circulants with complex entries has an idempotent basis. On that basis, a skew-cyclic group of automorphisms and functional equations on the skew-circulant algebra are introduced. Different operators on a linear vector space that are isomorphic to the algebra of n×n complex skew-circulant matrices are also displayed in this paper.
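
    A concrete computational instance of this structure: a skew-circulant (negacyclic) matrix is diagonalized by the DFT after twisting by the weights w^j with w = exp(iπ/n), so a matrix-vector product reduces to FFTs. A sketch with my own construction and naming, not the paper's notation:

```python
import numpy as np

def skew_circulant(c):
    # Dense n x n skew-circulant built from its first column c:
    # entries that wrap past the bottom edge change sign.
    n = len(c)
    A = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            A[j, k] = c[j - k] if j >= k else -c[n + j - k]
    return A

def skew_circulant_matvec(c, x):
    # A skew-circulant acts as negacyclic convolution, which the FFT
    # diagonalizes after twisting by w^j, w = exp(i*pi/n).
    n = len(c)
    w = np.exp(1j * np.pi * np.arange(n) / n)
    return np.real(np.fft.ifft(np.fft.fft(w * c) * np.fft.fft(w * x)) / w)
```

    The twisted-FFT product agrees with the dense matrix-vector product, which is the practical payoff of the algebra isomorphism the paper studies.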

  7. Incomplete block factorization preconditioning for indefinite elliptic problems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Chun-Hua [Univ. of Calgary, Alberta (Canada)

    1996-12-31

    The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that a specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small, and that this factorization can serve as an efficient preconditioner. Some effort is made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given.

  8. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
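
    For a known PSF, the classical minimizer of the Kullback-Leibler divergence under Poisson noise is the Richardson-Lucy iteration; the blind variant alternates updates of the object and the PSF, but the non-blind step can be sketched as follows (my sketch, assuming periodic boundaries and a normalized PSF, not the authors' inexact alternating scheme):

```python
import numpy as np

def richardson_lucy(y, h, iterations=100):
    # Multiplicative update that decreases the Kullback-Leibler divergence
    # between the Poisson data y and the blurred estimate H x:
    #   x <- x * ( H^T (y / (H x)) )
    H = np.fft.fft(h)
    def apply(v, F):
        # Periodic convolution (F = H) or correlation (F = conj(H)),
        # clipped away from zero to keep the ratio well defined.
        return np.maximum(np.real(np.fft.ifft(F * np.fft.fft(v))), 1e-12)
    x = np.full_like(y, y.mean())
    for _ in range(iterations):
        ratio = y / apply(x, H)
        x = x * apply(ratio, np.conj(H))
    return x
```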

  9. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of the potential earthquake response of a soil deposit, a soil structure or a soil-foundation-structure system requires knowledge of the earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. The process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence its output. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  10. Bandwidth Optimization of Normal Equation Matrix in Bundle Block Adjustment in Multi-baseline Rotational Photography

    Directory of Open Access Journals (Sweden)

    WANG Xiang

    2016-02-01

    A new bandwidth optimization method for the normal equation matrix in bundle block adjustment in multi-baseline rotational close-range photography, based on image index re-sorting, is proposed. The equivalent exposure station of each image is calculated from its object-space coverage and its relationship with other adjacent images. Then, according to the coordinate relations between equivalent exposure stations, new logical indices of all images are computed, from which the optimized bandwidth value can be obtained. Experimental results show that the bandwidth determined by the proposed method is significantly better than its original value; thus the operational efficiency, as well as the memory consumption, of multi-baseline rotational close-range photography in real-data applications is improved to a certain extent.

  11. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.

  12. Tracking juniper berry content in oils and distillates by spectral deconvolution of gas chromatography/mass spectrometry data.

    Science.gov (United States)

    Robbat, Albert; Kowalsick, Amanda; Howell, Jessalin

    2011-08-12

    The complex nature of botanicals and essential oils makes it difficult to identify all of the constituents by gas chromatography/mass spectrometry (GC/MS) alone. In this paper, automated sequential, multidimensional gas chromatography/mass spectrometry (GC-GC/MS) was used to obtain a matrix-specific, retention time/mass spectrometry library of 190 juniper berry oil compounds. GC/MS analysis on stationary phases with different polarities confirmed the identities of each compound when spectral deconvolution software was used to analyze the oil. Also analyzed were distillates of juniper berry and its oil as well as gin from four different manufacturers. Findings showed the chemical content of juniper berry can be traced from starting material to final product and can be used to authenticate and differentiate brands. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. NANOSTRUCTURED METAL OXIDE CATALYSTS VIA BUILDING BLOCK SYNTHESES

    Energy Technology Data Exchange (ETDEWEB)

    Craig E. Barnes

    2013-03-05

    A broadly applicable methodology has been developed to prepare new single-site catalysts on silica supports. This methodology requires three critical components: a rigid building block that will be the main structural and compositional component of the support matrix; a family of linking reagents that will be used to insert active metals into the matrix as well as to cross-link building blocks into a three-dimensional matrix; and a clean coupling reaction that will connect building blocks and linking agents together in a controlled fashion. The final piece of the conceptual strategy at the center of this methodology involves dosing the building block with known amounts of linking agents so that the targeted connectivity of a linking center to surrounding building blocks is obtained. Achieving targeted connectivities around catalytically active metals in these building block matrices is a critical element of the strategy by which single-site catalysts are obtained. This methodology has been demonstrated with a model system involving only silicon and then with two metal-containing systems (titanium and vanadium). The effect that connectivity has on the reactivity of atomically dispersed titanium sites in silica building block matrices has been investigated in the selective oxidation of phenols to benzoquinones. 2-connected titanium sites are found to be five times as active (in terms of initial turnover frequencies) as 4-connected titanium sites (i.e. framework titanium sites).

  14. Detecting the Spectrum of the Atlantic's Thermo-haline Circulation: Deconvolved Climate Proxies Show How Polar Climates Communicate

    Science.gov (United States)

    Reischmann, Elizabeth; Yang, Xiao; Rial, José

    2014-05-01

    Deconvolution is used in a wide variety of scientific fields, including its significant use in seismology, as a tool to recover the real input from a system's impulse response and output. Our research uses spectral division deconvolution to study the impulse response of the possible relationship between the nonlinear climates of the Polar Regions, using selected δ18O ice cores from both poles. This is feasible in spite of the fact that the records may be the result of nonlinear processes, because the two polar climates are synchronized for the period studied, forming a Hilbert transform pair. In order to perform this analysis, the age models of three Greenland and four Antarctic records have been matched using a Monte Carlo method, with the methane-matched pair GRIP and BYRD as a basis of calculations. For all twelve resulting pairs, various deconvolution schemes (Wiener, damped least squares, Tikhonov, truncated singular value decomposition) give consistent, quasi-periodic impulse responses of the system. Multitaper analysis then demonstrates strong, millennial-scale, quasi-periodic oscillations in these system responses with periods ranging from 2,500 to 1,000 years. However, these results are directionally dependent, with the transfer function from north to south differing from that from south to north. High-amplitude power peaks at 5,000 to 1,700 years characterize the former, while the latter contains peaks at 2,500 to 1,700 years. These predominant periodicities are also found in the data, some of which have been identified as solar forcing, but others of which may indicate internal oscillations of the climate system (1.6-1.4 ky). The approximately 1,500 year period of the transfer function, which does not have a corresponding solar forcing, may indicate one of these internal periodicities of the system, perhaps even indicating the long-term presence of the deep water circulation, also known as the thermo-haline circulation (THC).
Simplified models of
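
    The spectral-division deconvolution underlying this study can be sketched with the water-level stabilization common in seismology: the output spectrum is divided by the input spectrum, with the denominator clipped from below. This is a generic receiver-function-style sketch, not the authors' code; the water-level fraction and test signals are arbitrary:

```python
import numpy as np

def spectral_division(output, inp, water_level=0.01):
    # Water-level deconvolution: estimate the impulse response r from
    # output = inp * r by dividing spectra, clipping |A|^2 from below
    # at a fraction of its maximum to avoid division by near-zeros.
    A, B = np.fft.fft(inp), np.fft.fft(output)
    power = np.abs(A) ** 2
    floor = water_level * power.max()
    return np.real(np.fft.ifft(B * np.conj(A) / np.maximum(power, floor)))
```

    On a synthetic pair (a wavelet convolved with two spikes), the division recovers the spike positions and relative amplitudes, which is the sense in which the ice-core pairs yield an impulse response.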

  15. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    Science.gov (United States)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of a broadly varied MW structurally similar to native lignin. For the first time, we report the effective formation of multiply charged species of lignin, with subsequent mass spectrum deconvolution, in the presence of 100 mmol L-1 formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featured a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.
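
    The charge-state deconvolution step rests on elementary arithmetic: in positive-mode ESI a z-fold protonated ion of neutral mass M appears at m/z = (M + z·mH)/z, with mH ≈ 1.00728 Da. A hedged sketch (function names and the averaging strategy are mine, not from the paper):

```python
import numpy as np

PROTON_MASS = 1.00728  # Da

def mz(neutral_mass, z):
    # Positive-mode ESI: each charge adds one proton.
    return (neutral_mass + z * PROTON_MASS) / z

def deconvolve_charge_states(peaks_mz, charges):
    # Each charge state independently predicts M = z*(m/z) - z*mH;
    # averaging the predictions gives the deconvolved neutral mass.
    estimates = [z * m - z * PROTON_MASS for m, z in zip(peaks_mz, charges)]
    return float(np.mean(estimates))
```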

  16. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.

  17. Block fuel element for gas-cooled high temperature reactors

    International Nuclear Information System (INIS)

    Hrovat, M.F.

    1978-01-01

    The invention concerns a block fuel element consisting of a single, almost isotropic, highly crystalline carbon matrix in which the coated particles are incorporated by a pressing process. The block element is produced under isostatic pressure from graphite matrix powder and coated particles in a rubber die and is subsequently subjected to heat treatment. The main component of the graphite matrix powder is natural graphite powder, to which artificial graphite powder and a small amount of a phenol resin binder are added

  18. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  19. Quantized Matrix Algebras and Quantum Seeds

    DEFF Research Database (Denmark)

    Jakobsen, Hans Plesner; Pagani, Chiara

    2015-01-01

    We determine explicitly quantum seeds for classes of quantized matrix algebras. Furthermore, we obtain results on centres and block diagonal forms of these algebras. In the case where q is an arbitrary root of unity, this further determines the degrees.

  20. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

    Roč. 16, č. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  1. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed-order kinetics and for a continuous trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (I_m) and the maximum peak temperature (T_m), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) in the case of the continuous trap distribution, or a constant α in the case of mixed-order kinetics

  2. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

    Roč. 64, č. 2 (2004), s. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords : confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  3. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  4. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of its influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  5. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of its influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  6. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

    The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software which utilizes a standard peak search algorithm. (author)

  7. Synthesis and morphology of hydroxyapatite/polyethylene oxide nanocomposites with block copolymer compatibilized interfaces

    Science.gov (United States)

    Lee, Ji Hoon; Shofner, Meisha

    2012-02-01

    In order to exploit the promise of polymer nanocomposites, special consideration should be given to component interfaces during synthesis and processing. Previous results from this group have shown that nanoparticles clustered into larger structures consistent with their native shape when the polymer matrix crystallinity was high. Therefore, in this research, the nanoparticles are disguised from a highly crystalline polymer matrix by cloaking them with a matrix-compatible block copolymer. Specifically, spherical and needle-shaped hydroxyapatite nanoparticles were synthesized using a block copolymer templating method. The block copolymer used, polyethylene oxide-b-polymethacrylic acid, remained on the nanoparticle surface following synthesis with the polyethylene oxide block exposed. These nanoparticles were subsequently added to a polyethylene oxide matrix using solution processing. Characterization of the nanocomposites indicated that the copolymer coating prevented the nanoparticles from assembling into ordered clusters and that the matrix crystallinity was decreased at a nanoparticle spacing of approximately 100 nm.

  8. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
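
    decon1d's fitting machinery is not reproduced here, but the BIC comparison it relies on can be sketched on a toy spectrum (peak shapes, positions, widths and noise level all invented; only the amplitudes are fitted, for fixed candidate peak sets):

```python
import numpy as np

def bic(y, yhat, n_params):
    """Bayesian information criterion for Gaussian residuals:
    BIC = n*ln(RSS/n) + k*ln(n); lower is better."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

def lorentzian(x, x0, w):
    return w ** 2 / ((x - x0) ** 2 + w ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 400)
# synthetic "spectrum": two overlapping Lorentzian peaks plus noise
y = 1.0 * lorentzian(x, -1.5, 1.0) + 0.6 * lorentzian(x, 2.0, 1.5)
y += rng.normal(scale=0.01, size=x.size)

# candidate models with 1, 2 or 3 peaks at fixed trial centers/widths;
# the amplitudes are fit by linear least squares, so each model has
# k free parameters here (one amplitude per peak)
candidates = {
    1: [(-1.5, 1.0)],
    2: [(-1.5, 1.0), (2.0, 1.5)],
    3: [(-1.5, 1.0), (2.0, 1.5), (5.0, 1.0)],
}
scores = {}
for k, peaks in candidates.items():
    A = np.column_stack([lorentzian(x, x0, w) for x0, w in peaks])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)
    scores[k] = bic(y, A @ amps, n_params=k)
best = min(scores, key=scores.get)   # most parsimonious adequate model
```

    The one-peak model underfits (large residual) and the three-peak model pays the complexity penalty without reducing the residual beyond noise, so the two-peak model wins.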

  9. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution performance when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.
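
    The polar-voxel trick makes the system matrix block-circulant in the angular index, so only the m distinct blocks need to be stored and applied. A small sketch (block count and sizes invented) of the FFT-based matrix-vector product this structure enables:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 8, 3                                 # m angular positions, p unknowns each
blocks = rng.standard_normal((m, p, p))     # only B_0 ... B_{m-1} are stored

# dense block-circulant matrix for comparison: block (i, j) = B_{(i-j) mod m}
A = np.zeros((m * p, m * p))
for i in range(m):
    for j in range(m):
        A[i*p:(i+1)*p, j*p:(j+1)*p] = blocks[(i - j) % m]

x = rng.standard_normal(m * p)

# FFT along the block index diagonalizes the circulant structure,
# turning the big product into m independent p x p products
Bh = np.fft.fft(blocks, axis=0)             # (m, p, p)
Xh = np.fft.fft(x.reshape(m, p), axis=0)    # (m, p)
Yh = np.einsum('kij,kj->ki', Bh, Xh)
y_fft = np.real(np.fft.ifft(Yh, axis=0)).ravel()
```

    Storage drops from m^2 blocks to m, and each iteration of the reconstruction applies the matrix in O(m log m) block operations instead of O(m^2).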

  10. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  11. Matrix formulation of pebble circulation in the pebbed code

    International Nuclear Information System (INIS)

    Gougar, H.D.; Terry, W.K.; Ougouag, A.M.

    2002-01-01

    The PEBBED technique provides a foundation for equilibrium fuel cycle analysis and optimization in pebble-bed cores in which the fuel elements are continuously flowing and, if desired, recirculating. In addition to the modern analysis techniques used in or being developed for the code, PEBBED incorporates a novel nuclide-mixing algorithm that allows for sophisticated recirculation patterns using a matrix generated from basic core parameters. Derived from a simple partitioning of the pebble flow, the elements of the recirculation matrix are used to compute the spatially averaged density of each nuclide at the entry plane from the nuclide densities of pebbles emerging from the discharge conus. The order of the recirculation matrix is a function of the flexibility and sophistication of the fuel handling mechanism. This formulation for coupling pebble flow and neutronics enables core design and fuel cycle optimization to be performed by the manipulation of a few key core parameters. The formulation is amenable to modern optimization techniques. (author)
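
    A toy version of such a recirculation matrix (channel count, removal fraction and nuclide densities all invented, not PEBBED's actual partitioning) might look like:

```python
import numpy as np

# Toy recirculation: 3 flow channels; a fraction `f` of discharged pebbles
# is removed as fully burnt and replaced by fresh fuel, and the remainder
# is re-mixed over the entry channels by the recirculation matrix R.
f = 0.2
R = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])        # columns sum to 1: pebbles are conserved
n_fresh = np.array([1.0, 0.0])         # nuclide densities in fresh pebbles
n_out = np.array([[0.4, 0.5, 0.6],     # nuclide 1 density leaving each channel
                  [0.1, 0.1, 0.05]])   # nuclide 2 density leaving each channel

# spatially averaged nuclide densities at the entry plane of each channel:
# recirculated discharge mixed by R plus the fresh-fuel feed
n_in = (1 - f) * (n_out @ R.T) + f * n_fresh[:, None]
```

    With equal channel flows, the column-stochastic R guarantees a nuclide balance: the total entering equals the recirculated total leaving plus the fresh feed.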

  12. Computation of the q -th roots of circulant matrices

    Directory of Open Access Journals (Sweden)

    Pakizeh Mohammadi Khanghah

    2014-05-01

    Full Text Available In this paper, we investigate the reduced form of circulant matrices and we show that the problem of computing the $q$-th roots of a nonsingular circulant matrix $A$ can be reduced to that of computing the $q$-th roots of two half-size matrices $B-C$ and $B+C$.
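
    The half-size reduction to $B-C$ and $B+C$ is not reproduced here; a sketch of the fact it builds on, namely that a circulant matrix is diagonalized by the DFT, so $q$-th roots can be taken eigenvalue-wise:

```python
import numpy as np

def circulant(c):
    """Dense circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def circulant_qth_root(c, q):
    """A q-th root of the circulant matrix C(c): the DFT of the first
    column gives the eigenvalues, so take principal q-th roots of the
    eigenvalues and transform back to a first column."""
    lam = np.fft.fft(c)                            # eigenvalues of C(c)
    root_first_col = np.fft.ifft(lam ** (1.0 / q)) # first column of the root
    return circulant(root_first_col)

c = np.array([5.0, 1.0, 0.5, 1.0])   # symmetric choice: real positive spectrum
C = circulant(c)
R = circulant_qth_root(c, 3)         # cube root of C, itself circulant
```

    Because every circulant shares the DFT eigenvectors, the root is again circulant and `R` cubed recovers `C` exactly (up to roundoff).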

  13. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model reasonably captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method.
This suggests that, in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the model.

  14. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  15. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  16. Drug Delivery and Transport into the Central Circulation: An Example of Zero-Order In vivo Absorption of Rotigotine from a Transdermal Patch Formulation.

    Science.gov (United States)

    Cawello, Willi; Braun, Marina; Andreas, Jens-Otto

    2018-01-13

    Pharmacokinetic studies using deconvolution methods and non-compartmental analysis to model clinical absorption of drugs are not well represented in the literature. The purpose of this research was (1) to define the system of equations for description of rotigotine (a dopamine receptor agonist delivered via a transdermal patch) absorption based on a pharmacokinetic model and (2) to describe the kinetics of rotigotine disposition after single and multiple dosing. The kinetics of drug disposition was evaluated based on rotigotine plasma concentration data from three phase 1 trials. In two trials, rotigotine was administered via a single patch over 24 h in healthy subjects. In a third trial, rotigotine was administered once daily over 1 month in subjects with early-stage Parkinson's disease (PD). A pharmacokinetic model utilizing deconvolution methods was developed to describe the relationship between drug release from the patch and plasma concentrations. Plasma-concentration over time profiles were modeled based on a one-compartment model with a time lag, a zero-order input (describing a constant absorption via skin into central circulation) and first-order elimination. Corresponding mathematical models for single- and multiple-dose administration were developed. After single-dose administration of rotigotine patches (using 2, 4 or 8 mg/day) in healthy subjects, a constant in vivo absorption was present after a minor time lag (2-3 h). On days 27 and 30 of the multiple-dose study in patients with PD, absorption was constant during patch-on periods and resembled zero-order kinetics. Deconvolution based on rotigotine pharmacokinetic profiles after single- or multiple-dose administration of the once-daily patch demonstrated that in vivo absorption of rotigotine showed constant input through the skin into the central circulation (resembling zero-order kinetics). Continuous absorption through the skin is a basis for stable drug exposure.
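
    The model described, zero-order input after a time lag with first-order elimination, can be sketched as follows (all parameter values are invented for illustration, not the fitted rotigotine values):

```python
import numpy as np

def conc_zero_order(t, dose=8.0, T=24.0, tlag=2.5, ke=0.12, V=50.0):
    """One-compartment model with a time lag, zero-order input
    (constant rate k0 = dose/T while the patch is on) and first-order
    elimination. All parameter values here are illustrative."""
    k0 = dose / T
    t = np.asarray(t, dtype=float)
    c_end = (k0 / (V * ke)) * (1.0 - np.exp(-ke * T))   # level at patch removal
    rising = (k0 / (V * ke)) * (1.0 - np.exp(-ke * (t - tlag)))
    falling = c_end * np.exp(-ke * (t - tlag - T))
    return np.where(t < tlag, 0.0, np.where(t <= tlag + T, rising, falling))

t = np.linspace(0.0, 48.0, 481)   # two-day profile on a 0.1 h grid
c = conc_zero_order(t)
```

    The profile is zero before the lag, rises toward a plateau while input is constant (the zero-order "patch-on" period), and decays exponentially once the patch is removed.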

  17. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    The purpose of this work was to minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses the line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain acceptable signal-to-noise ratio. The comparison of the algorithm against Tikhonov regularized reconstructions demonstrates considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity constrained spectral deconvolution scheme is effective in minimizing the line-shape distortions. The dual resolution reconstruction scheme is capable of minimizing spectral leakage artifacts. Copyright © 2013 Wiley Periodicals, Inc.

  18. An iterative method to invert the LTSn matrix

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, A.V.; Vilhena, M.T. de [UFRGS, Porto Alegre (Brazil)

    1996-12-31

    Recently, Vilhena and Barichello proposed the LTSn method to solve, analytically, the discrete ordinates problem (Sn problem) in transport theory. The main feature of this method consists in applying the Laplace transform to the set of Sn equations and solving the resulting algebraic system for the transport flux. Barichello solved the linear system containing the parameter s by applying the definition of matrix inversion, exploiting the structure of the LTSn matrix. In this work, a new scheme to invert the LTSn matrix is proposed, decomposing it into blocks and recursively inverting these blocks.
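
    The LTSn-specific recursion is not reproduced here; the identity such schemes rest on is the Schur-complement form of the inverse of a 2x2-partitioned matrix, which can then be applied recursively to the blocks. A sketch (matrix sizes invented):

```python
import numpy as np

def block_inverse(M, p):
    """Invert a 2x2-partitioned matrix via the Schur complement:
        M = [[A, B], [C, D]],  with A of size p x p.
    Applying this identity recursively to the sub-blocks gives a
    block-by-block inversion scheme."""
    A, B = M[:p, :p], M[:p, p:]
    C, D = M[p:, :p], M[p:, p:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                      # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bot_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bot_left, Sinv]])

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test matrix
Minv = block_inverse(M, 3)
```

    Only inverses of the two half-size blocks A and S are needed, which is what makes the recursive decomposition attractive for structured matrices.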

  19. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather, the basic issue of deconvolvability has been explored from a theoretical viewpoint. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  20. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

    Roč. 21, č. 4 (2012), s. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  1. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.

  2. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely, and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 511 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)
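
    A minimal sketch of the approach, FFT division with a Fourier-space filter, is given below; the hard relative cutoff used here is a crude stand-in for the paper's automatic background-subtraction filter, and all line widths are invented:

```python
import numpy as np

def fft_deconvolve(measured, resolution, cutoff=1e-3):
    """Deconvolve a measured lineshape by the instrumental resolution
    function via FFT division. Frequencies where the resolution
    transfer function falls below `cutoff` of its maximum (i.e. into
    the noise background) are zeroed instead of divided, which is what
    introduces the residual broadening and mild ringing."""
    F = np.fft.fft(measured)
    H = np.fft.fft(resolution)
    keep = np.abs(H) > cutoff * np.abs(H).max()
    out = np.zeros_like(F)
    out[keep] = F[keep] / H[keep]
    return np.real(np.fft.ifft(out))

n = 256
x = np.arange(n)
true_line = np.exp(-0.5 * ((x - 128) / 4.0) ** 2)       # narrow true lineshape
res = np.exp(-0.5 * (np.minimum(x, n - x) / 8.0) ** 2)  # broader resolution fn
res /= res.sum()
measured = np.real(np.fft.ifft(np.fft.fft(true_line) * np.fft.fft(res)))
recovered = fft_deconvolve(measured, res)
```

    As the abstract notes, the resolution contribution is substantially reduced but not eliminated: the zeroed high frequencies can never be recovered.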

  3. Harmony of spinning conformal blocks

    Energy Technology Data Exchange (ETDEWEB)

    Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Sobko, Evgeny [Stockholm Univ. (Sweden); Nordita, Stockholm (Sweden); Isachenkov, Mikhail [Weizmann Institute of Science, Rehovoth (Israel). Dept. of Particle Physics and Astrophysics

    2016-12-07

    Conformal blocks for correlation functions of tensor operators play an increasingly important role for the conformal bootstrap programme. We develop a universal approach to such spinning blocks through the harmonic analysis of certain bundles over a coset of the conformal group. The resulting Casimir equations are given by a matrix version of the Calogero-Sutherland Hamiltonian that describes the scattering of interacting spinning particles in a 1-dimensional external potential. The approach is illustrated in several examples including fermionic seed blocks in 3D CFT where they take a very simple form.

  4. Harmony of spinning conformal blocks

    Energy Technology Data Exchange (ETDEWEB)

    Schomerus, Volker [DESY Hamburg, Theory Group,Notkestraße 85, 22607 Hamburg (Germany); Sobko, Evgeny [Nordita and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Isachenkov, Mikhail [Department of Particle Physics and Astrophysics, Weizmann Institute of Science,Rehovot 7610001 (Israel)

    2017-03-15

    Conformal blocks for correlation functions of tensor operators play an increasingly important role for the conformal bootstrap programme. We develop a universal approach to such spinning blocks through the harmonic analysis of certain bundles over a coset of the conformal group. The resulting Casimir equations are given by a matrix version of the Calogero-Sutherland Hamiltonian that describes the scattering of interacting spinning particles in a 1-dimensional external potential. The approach is illustrated in several examples including fermionic seed blocks in 3D CFT where they take a very simple form.

  5. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    A generally known problem of GC separation of trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 degrees C. The resolution of these CLA isomers obtained at the given conditions was not high enough for direct quantitative analysis, but it was, however, sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by the separation of a model CLA mixture prepared by mixing a commercial CLA mixture with a CLA isomer fraction obtained by HPLC semi-preparative separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two small isomers trans-7,cis-9 and trans-8,cis-10 (in ratios up to 1:100).

  6. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more discriminative high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.
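
    The compression step, projecting a high-dimensional feature with a sparse random Gaussian measurement matrix, can be sketched as follows (dimensions, sparsity level and feature vectors all invented):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 5000, 64                  # high-dimensional feature -> compressed length
p = 0.05                         # fraction of nonzero entries (illustrative)

# sparse random Gaussian measurement matrix, scaled so that
# E||Phi v||^2 = ||v||^2 (most entries zero, nonzeros ~ N(0, 1))
mask = rng.random((m, d)) < p
Phi = np.where(mask, rng.standard_normal((m, d)), 0.0) / np.sqrt(p * m)

x1 = rng.standard_normal(d)              # stand-ins for two high-dimensional
x2 = x1 + 0.1 * rng.standard_normal(d)   # block-difference feature vectors
y1, y2 = Phi @ x1, Phi @ x2              # compressed features

# Johnson-Lindenstrauss-style: distances roughly survive the compression
ratio = np.linalg.norm(y1 - y2) / np.linalg.norm(x1 - x2)
```

    The sparsity makes the projection cheap to evaluate per frame, while the random Gaussian nonzeros keep distances between feature vectors approximately preserved, which is what lets the classifier operate on the compressed features.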

  7. Block Tridiagonal Matrices in Electronic Structure Calculations

    DEFF Research Database (Denmark)

    Petersen, Dan Erik

    in the Landauer–Büttiker ballistic transport regime. These calculations concentrate on determining the so– called Green’s function matrix, or portions thereof, which is the inverse of a block tridiagonal general complex matrix. To this end, a sequential algorithm based on Gaussian elimination named Sweeps...

  8. Block-triangular preconditioners for PDE-constrained optimization

    KAUST Repository

    Rees, Tyrone

    2010-11-26

    In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show that, when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix, the main drawback of the Bramble-Pasciak method, the appropriate scaling of the preconditioners, is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.

  9. Block-triangular preconditioners for PDE-constrained optimization

    KAUST Repository

    Rees, Tyrone; Stoll, Martin

    2010-01-01

    In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show that, when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix, the main drawback of the Bramble-Pasciak method, the appropriate scaling of the preconditioners, is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.
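
    The effect that makes block-triangular preconditioners attractive can be sketched on a generic saddle point system: with the exact Schur complement, every eigenvalue of the preconditioned matrix equals 1. The example below (sizes and matrices invented) is a generic illustration, not the Bramble-Pasciak setup with its non-standard inner product:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # SPD (1,1) block
B = rng.standard_normal((m, n))                               # constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])               # saddle point matrix

S = B @ np.linalg.inv(A) @ B.T                     # exact Schur complement
P = np.block([[A, B.T], [np.zeros((m, n)), -S]])   # block-triangular preconditioner

# with the exact Schur complement, P^{-1} K has the single eigenvalue 1
# (with nontrivial Jordan blocks), so a Krylov method converges in a
# couple of iterations; in practice S is replaced by a cheap approximation
eig = np.linalg.eigvals(np.linalg.solve(P, K))
```

    The eigenvalue analysis cited in the abstract quantifies how far the cluster spreads once A and S are replaced by inexact preconditioners such as the Chebyshev semi-iteration.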

  10. A comparison of lower bounds for the symmetric circulant traveling salesman problem

    NARCIS (Netherlands)

    de Klerk, E.; Dobre, C.

    2011-01-01

    When the matrix of distances between cities is symmetric and circulant, the traveling salesman problem (TSP) reduces to the so-called symmetric circulant traveling salesman problem (SCTSP), that has applications in the design of reconfigurable networks, and in minimizing wallpaper waste. The

  11. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Full Text Available Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS. Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economical value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication started with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in the recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  12. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and shown in various studies the potential to suppress artifacts due to
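    The crosscorrelation step that MDD aims to improve can be illustrated on synthetic data. A minimal sketch (receiver geometry, traveltimes, and the noise source are all invented for illustration): the crosscorrelation of two receiver records peaks at the differential traveltime, i.e., the arrival time of the inter-receiver impulse response.

```python
import numpy as np

rng = np.random.default_rng(0)

# A broadband source recorded at two receivers with different traveltimes
# (all numbers are invented for illustration).
n = 512
src = rng.standard_normal(n)
d1, d2 = 40, 95                       # traveltimes in samples
rec1, rec2 = np.roll(src, d1), np.roll(src, d2)

# The crosscorrelation of the two records peaks at the differential
# traveltime d2 - d1: the arrival time of the impulse response between
# the two receiver positions (up to the source autocorrelation).
xcorr = np.correlate(rec2, rec1, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
```

    MDD replaces this correlation with a deconvolution that also removes the source signature, which is why it can suppress correlation artifacts.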

  13. A Perron–Frobenius theory for block matrices associated to a multiplex network

    International Nuclear Information System (INIS)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-01-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of a nonnegative block matrix associated to a multiplex network, the irreducibility of the corresponding matrices of each layer, and the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally, we present the precise relations that allow the Perron eigenvector of the multiplex network to be expressed in terms of the Perron eigenvectors of its layers.
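    The Perron vector whose uniqueness the paper studies can be computed by power iteration. Below is a minimal sketch for a hypothetical two-layer supra-adjacency block matrix; the layer adjacencies and the identity inter-layer coupling are illustrative choices, not taken from the paper.

```python
import numpy as np

def perron_vector(A, tol=1e-12, max_iter=10_000):
    """Power iteration for the Perron vector of a nonnegative matrix A.

    For an irreducible, primitive nonnegative matrix, Perron-Frobenius
    theory guarantees a unique (up to scaling) positive eigenvector for
    the spectral radius, and power iteration converges to it.
    """
    n = A.shape[0]
    v = np.ones(n) / n
    for _ in range(max_iter):
        w = A @ v
        w /= w.sum()                      # normalise like a probability vector
        if np.linalg.norm(w - v, 1) < tol:
            break
        v = w
    return w

# Two-layer "multiplex" example: a supra-adjacency block matrix with the
# layer adjacencies on the diagonal and identity coupling off-diagonal.
A1 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # layer 1 (triangle)
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # layer 2 (path)
I = np.eye(3)
S = np.block([[A1, I], [I, A2]])

v = perron_vector(S)
lam = (S @ v) / v          # componentwise Rayleigh check: constant if eigenvector
```
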

  14. A Perron-Frobenius theory for block matrices associated to a multiplex network

    Science.gov (United States)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-03-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of a nonnegative block matrix associated to a multiplex network, the irreducibility of the corresponding matrices of each layer, and the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally, we present the precise relations that allow the Perron eigenvector of the multiplex network to be expressed in terms of the Perron eigenvectors of its layers.

  15. Pioglitazone Attenuates Drug-Eluting Stent-Induced Proinflammatory State in Patients by Blocking Ubiquitination of PPAR

    Directory of Open Access Journals (Sweden)

    Zhongxia Wang

    2016-01-01

    Full Text Available The inflammatory response after polymer-based drug-eluting stent (DES) placement has recently emerged as a major concern. The biologic roles of peroxisome proliferator-activated receptor-γ (PPAR-γ) activators, the thiazolidinediones (TZDs), remain controversial in cardiovascular disease. Herein, we investigated the anti-inflammatory effects of pioglitazone (PIO) on circulating peripheral blood mononuclear cells (MNCs) in patients after coronary DES implantation. Methods and Results. Twenty-eight patients with coronary artery disease who underwent DES implantation were randomly assigned to pioglitazone (30 mg/d; PIO) or placebo (control; Con) treatment in addition to optimal standard therapy. After 12 weeks of treatment, plasma concentrations of high-sensitivity C-reactive protein (hs-CRP), interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), and matrix metalloproteinase-9 (MMP-9) were significantly decreased in the PIO group compared to the Con group (P=0.035, 0.011, 0.008, and 0.012, resp.). DES-induced mRNA expression of IL-6, TNF-α, and MMP-9 in circulating MNCs was significantly blocked by PIO (P=0.031, 0.012, and 0.007, resp.). In addition, PIO markedly inhibited DES-enhanced NF-κB function and restored DES-blocked PPAR-γ activity. Mechanistically, DES induced PPAR-γ ubiquitination and degradation at the protein level, which could be totally reversed by PIO. Conclusion. PIO treatment attenuated DES-induced PPAR-γ loss, NF-κB activation, and proinflammation, indicating that PIO may have a novel direct protective role in modulating proinflammation in the DES era.

  16. Evaluation of Isoprene Chain Extension from PEO Macromolecular Chain Transfer Agents for the Preparation of Dual, Invertible Block Copolymer Nanoassemblies.

    Science.gov (United States)

    Bartels, Jeremy W; Cauët, Solène I; Billings, Peter L; Lin, Lily Yun; Zhu, Jiahua; Fidge, Christopher; Pochan, Darrin J; Wooley, Karen L

    2010-09-14

    Two RAFT-capable PEO macro-CTAs, 2 and 5 kDa, were prepared and used for the polymerization of isoprene, which yielded well-defined block copolymers of varied lengths and compositions. GPC analysis of the PEO macro-CTAs and block copolymers showed remaining unreacted PEO macro-CTA. Mathematical deconvolution of the GPC chromatograms allowed estimation of the blocking efficiency: about 50% for the 5 kDa PEO macro-CTA and 64% for the 2 kDa CTA. Self-assembly of the block copolymers in both water and decane was investigated, and the resulting regular and inverse assemblies, respectively, were analyzed with DLS, AFM, and TEM to ascertain their dimensions and properties. Assembly of PEO-b-PIp block copolymers in aqueous solution resulted in well-defined micelles of varying sizes, while assembly in hydrophobic organic solvent resulted in the formation of different morphologies, including large aggregates and well-defined cylindrical and spherical structures.
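    The kind of chromatogram deconvolution used to estimate blocking efficiency can be sketched as a two-peak fit: resolve the overlapping copolymer and unreacted macro-CTA peaks, then take the copolymer's area fraction. The peak shapes, positions, and areas below are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic GPC trace: a block-copolymer peak overlapping residual macro-CTA.
def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return gaussian(x, a1, mu1, s1) + gaussian(x, a2, mu2, s2)

x = np.linspace(0, 30, 600)                 # elution volume (mL), hypothetical
true = (1.0, 12.0, 1.2, 0.6, 16.0, 1.0)     # copolymer peak + unreacted CTA
y = two_gaussians(x, *true)

p0 = (0.8, 11.0, 1.0, 0.8, 17.0, 1.0)       # rough initial guess
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)

# Peak areas (a * sigma * sqrt(2*pi)); blocking efficiency = copolymer fraction.
area1 = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
area2 = popt[3] * abs(popt[5]) * np.sqrt(2 * np.pi)
blocking_efficiency = area1 / (area1 + area2)
```
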

  17. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  18. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.
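    The Butterworth-filtering step can be sketched on a one-dimensional fluence profile, together with the paper's extra approximation of zeroing values outside the field. The field width, filter order, and cutoff below are illustrative assumptions, not the paper's parameters, and the series-expansion truncation is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# A rectangular 1D fluence profile with sharp field edges (arbitrary units).
x = np.linspace(-10, 10, 401)
fluence = np.where(np.abs(x) <= 5.0, 1.0, 0.0)

# 4th-order low-pass Butterworth; cutoff given as a fraction of Nyquist.
b, a = butter(N=4, Wn=0.1)
smoothed = filtfilt(b, a, fluence)          # zero-phase filtering

# Extra approximation: force fluence values outside the field to zero.
robust = np.where(np.abs(x) <= 5.0, smoothed, 0.0)
```
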

  19. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  20. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic power needed to penetrate highly attenuating materials can only be obtained with resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but such methods run into difficulties for austenitic steel. Here we develop a time-domain deconvolution method that takes the characteristics of the wave into account: a first step of phase correction, and a second step of spectral equalization which restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method.

  1. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, in recovering both the blurring operator and the true image, makes the problem genuinely difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
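    The separable structure can be exploited as in the following variable-projection sketch: for each value of the nonlinear parameter, the linear coefficients are eliminated by a plain least-squares solve, leaving a lower-dimensional outer problem. The exponential-plus-offset model and the grid search over tau are illustrative stand-ins for the paper's image model and Gauss-Newton iteration.

```python
import numpy as np

# Separable model y(t) = c1*exp(-t/tau) + c2: linear in (c1, c2),
# nonlinear in tau.  Variable projection eliminates the linear part.
t = np.linspace(0.0, 5.0, 100)
tau_true, c_true = 1.3, np.array([2.0, 0.5])
y = c_true[0] * np.exp(-t / tau_true) + c_true[1]

def projected_residual(tau):
    """Solve for the optimal linear coefficients at this tau, return the
    residual norm of the projected problem and the coefficients."""
    A = np.column_stack([np.exp(-t / tau), np.ones_like(t)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(A @ c - y), c

# Crude outer minimisation over tau (a real solver would use Gauss-Newton).
taus = np.linspace(0.2, 4.0, 2000)
tau_hat = min(taus, key=lambda tau: projected_residual(tau)[0])
c_hat = projected_residual(tau_hat)[1]
```
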

  2. Surgical myocardial revascularization without extracorporeal circulation

    Directory of Open Access Journals (Sweden)

    Salomón Soriano Ordinola Rojas

    2003-05-01

    Full Text Available OBJECTIVE: To assess the immediate postoperative period of patients undergoing myocardial revascularization without extracorporeal circulation with different types of grafts. METHODS: One hundred and twelve patients, 89 (79.5%) of whom were males, were revascularized without extracorporeal circulation. Their ages ranged from 39 to 85 years. The criteria for indicating myocardial revascularization without extracorporeal circulation were as follows: revascularized coronary artery caliber > 1.5 mm, lack of an intramyocardial trajectory on coronary angiography, noncalcified coronary arteries, and tolerance of the heart to the different rotation maneuvers. RESULTS: Myocardial revascularization without extracorporeal circulation was performed in 112 patients. Three were converted to extracorporeal circulation, which required a longer hospital stay but did not affect mortality. During the procedure, the following events were observed: atrial fibrillation in 10 patients, ventricular fibrillation in 4, transient total atrioventricular block in 2, ventricular extrasystoles in 58, use of a device to retrieve red blood cells in 53, blood transfusion in 8, and arterial hypotension in 89 patients. Coronary angiography was performed in 20 patients on the seventh postoperative day, when the grafts were patent. CONCLUSION: Myocardial revascularization without extracorporeal circulation is a reproducible technique and an alternative for treating ischemic heart disease.

  3. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    Full Text Available In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution to the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate practical application of standard restoration methods to improve imaging of phase objects such as cells in transmitted light microscopy.
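    Conventional deconvolution with a known, measured impulse response can be sketched with a Wiener filter. The Gaussian PSF, the point object, and the regularization constant below are illustrative assumptions, not the measured phase PSF of the paper.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """FFT-based Wiener deconvolution with a known PSF.

    `k` is a regularisation constant standing in for the noise-to-signal
    power ratio; this is a sketch, not the paper's exact restoration step.
    """
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Toy example: blur a point-like object with a Gaussian PSF, then restore.
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - 3) ** 2 + (yy - 3) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

obj = np.zeros((n, n))
obj[32, 32] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf, s=obj.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-6)
```

    The restored image concentrates the spread-out energy back onto the original point, subject to the frequencies the regularization suppresses.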

  4. Non-perturbative topological strings and conformal blocks

    NARCIS (Netherlands)

    Cheng, M.C.N.; Dijkgraaf, R.; Vafa, C.

    2011-01-01

    We give a non-perturbative completion of a class of closed topological string theories in terms of building blocks of dual open strings. In the specific case where the open string is given by a matrix model these blocks correspond to a choice of integration contour. We then apply this definition to

  5. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  6. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  7. A block variant of the GMRES method on massively parallel processors

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guangye [Cray Research, Inc., Eagan, MN (United States)

    1996-12-31

    This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. The algorithm generates a transformed Hessenberg matrix using solely block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted by BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache-based single-CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs, due to both efficient block operations and efficient block data communications. Our numerical results also show that, in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.
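    The basic payoff of a block method, replacing s separate matrix-vector products (GEMVs) with one matrix-block product (a single GEMM that streams through A once), can be checked directly; the sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 200, 4
A = rng.standard_normal((n, n))
V = rng.standard_normal((n, s))        # s Krylov vectors kept as one block

# One block operation (a single GEMM) versus s separate matvecs (GEMVs);
# the results are identical, but the block form does more work per pass
# over A, which is what favours caches and reduces communication on MPPs.
block_result = A @ V
loop_result = np.column_stack([A @ V[:, j] for j in range(s)])
```
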

  8. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode /SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode /SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.
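    A standard image-deconvolution scheme such as Richardson-Lucy illustrates the kind of restoration involved; the paper's actual pipeline for Hinode/SP data may differ, and the Gaussian PSF and two-point scene below are toys.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=200):
    """Plain Richardson-Lucy deconvolution with a known, normalised PSF."""
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)   # data / current prediction
        est = est * fftconvolve(ratio, psf_flip, mode="same")
    return est

# Toy scene: two point sources blurred by an odd-sized Gaussian PSF.
m = 33
yy, xx = np.mgrid[:m, :m]
psf = np.exp(-((xx - m // 2) ** 2 + (yy - m // 2) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()

scene = np.zeros((32, 32))
scene[10, 10], scene[20, 22] = 1.0, 0.5
blurred = np.maximum(fftconvolve(scene, psf, mode="same"), 0.0)
restored = richardson_lucy(blurred, psf)
```
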

  9. Identification of p63+ keratinocyte progenitor cells in circulation and their matrix-directed differentiation to epithelial cells.

    Science.gov (United States)

    Nair, Renjith P; Krishnan, Lissy K

    2013-04-11

    dermal fibroblast monolayer or fibrin supported cell proliferation and showed typical hexagonal morphology of keratinocytes within 15 days. Circulating KPCs were identified with p63, which differentiated into keratinocytes with expression of the cytokeratins, involucrin and filaggrin. Components of the specifically designed matrix favored KPC attachment, directed differentiation, and may turn out to be a potential vehicle for cell transplantation.

  10. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  11. Optimal block-tridiagonalization of matrices for coherent charge transport

    International Nuclear Information System (INIS)

    Wimmer, Michael; Richter, Klaus

    2009-01-01

    Numerical quantum transport calculations are commonly based on a tight-binding formulation. A wide class of quantum transport algorithms requires the tight-binding Hamiltonian to be in the form of a block-tridiagonal matrix. Here, we develop a matrix reordering algorithm, based on graph partitioning techniques, that yields the optimal block-tridiagonal form for quantum transport. The reordered Hamiltonian can lead to significant performance gains in transport calculations, and allows one to apply conventional two-terminal algorithms to arbitrarily complex geometries, including multi-terminal structures. The block-tridiagonalization algorithm can thus be the foundation for a generic quantum transport code, applicable to arbitrary tight-binding systems. We demonstrate the power of this approach by applying the block-tridiagonalization algorithm together with the recursive Green's function algorithm to various examples of mesoscopic transport in two-dimensional electron gases in semiconductors and graphene.
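    Once a Hamiltonian is in block-tridiagonal form, linear systems with it can be solved block by block, which is the pattern that recursive Green's function algorithms exploit. A minimal block-Thomas sketch (no pivoting across blocks; the well-conditioned test system is invented):

```python
import numpy as np

def solve_block_tridiagonal(D, U, L_, b):
    """Block Thomas algorithm for a block-tridiagonal system.

    D[i]  : diagonal blocks (k x k), i = 0..N-1
    U[i]  : super-diagonal blocks, i = 0..N-2
    L_[i] : sub-diagonal blocks (block below D[i]), i = 0..N-2
    b     : right-hand side, shape (N*k,)
    """
    N, k = len(D), D[0].shape[0]
    Dh = [d.copy() for d in D]
    bh = b.reshape(N, k).copy()
    for i in range(1, N):                          # forward elimination
        m = L_[i - 1] @ np.linalg.inv(Dh[i - 1])
        Dh[i] = Dh[i] - m @ U[i - 1]
        bh[i] = bh[i] - m @ bh[i - 1]
    x = np.empty_like(bh)
    x[N - 1] = np.linalg.solve(Dh[N - 1], bh[N - 1])
    for i in range(N - 2, -1, -1):                 # back substitution
        x[i] = np.linalg.solve(Dh[i], bh[i] - U[i] @ x[i + 1])
    return x.reshape(-1)

# Random well-conditioned test system with 2x2 blocks.
rng = np.random.default_rng(3)
N, k = 5, 2
D = [rng.standard_normal((k, k)) + 4 * np.eye(k) for _ in range(N)]
U = [0.3 * rng.standard_normal((k, k)) for _ in range(N - 1)]
L_ = [0.3 * rng.standard_normal((k, k)) for _ in range(N - 1)]
b = rng.standard_normal(N * k)

x = solve_block_tridiagonal(D, U, L_, b)
```
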

  12. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.
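    At its core, isotope pattern deconvolution is a small linear least-squares problem: the measured pattern is modelled as a mix of the natural-abundance and enriched signatures. A minimal sketch (the natural Fe abundances are standard values; the 57Fe tracer pattern and the mixing fractions are assumptions for illustration, not the paper's data):

```python
import numpy as np

# Natural iron isotope abundances (54Fe, 56Fe, 57Fe, 58Fe) and a
# hypothetical 57Fe-enriched tracer pattern (the 95% enrichment figure
# is an illustrative assumption).
natural = np.array([0.05845, 0.91754, 0.02119, 0.00282])
tracer = np.array([0.002, 0.038, 0.950, 0.010])

# Measured pattern = x_nat * natural + x_tr * tracer; isotope pattern
# deconvolution recovers the molar fractions by least squares.
x_true = np.array([0.85, 0.15])
measured = x_true[0] * natural + x_true[1] * tracer

A = np.column_stack([natural, tracer])
x_hat, *_ = np.linalg.lstsq(A, measured, rcond=None)
tracer_tracee_ratio = x_hat[1] / x_hat[0]
```

    With more than two signatures (e.g., a mass-bias reference), the same overdetermined solve applies column by column.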

  13. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)

  14. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which induces severe limitations on the resolution of the received echograms. This resolution may be improved with deconvolution methods, but one-dimensional deconvolution methods run into problems in non destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature, but they often come from other application fields (biomedical engineering, geophysics) and we show that they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats the deconvolution problem as an estimation problem and is performed in two steps: (i) a phase correction step, which takes into account the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is only due to the transducer and is assumed time-invariant during the propagation; (ii) a band equalization step, which restores the spectral content of the ideal reflectivity. Both steps are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and actual results are given to show that this is a good approach for resolution improvement in attenuating media.

  15. Application of multi-block methods in cement production

    DEFF Research Database (Denmark)

    Svinning, K.; Høskuldsson, Agnar

    2008-01-01

    Compressive strength at 1 day of Portland cement as a function of the microstructure of cement was statistically modelled by application of a multi-block regression method. The observation X-matrix was partitioned into four blocks, the first block representing the mineralogy, the second particle size distribution, and the two last blocks the superficial microstructure analysed by differential thermogravimetric analysis. The multi-block method is used to identify the role of each part. The score vectors of each block can be analysed separately or together with score vectors of other blocks. Stepwise regression is used to find the minimum number of variables of each block. The multi-block method proved useful in determining the modelling strength of each data block and finding the minimum number of variables within each data block.
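    The multi-block idea, per-block score extraction followed by a joint regression, can be sketched as follows. The four synthetic blocks stand in for the mineralogy, particle-size, and thermogravimetric data, and the score/regression choices are illustrative rather than the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40                                  # synthetic cement samples

# Four data blocks with different numbers of variables (synthetic stand-ins).
blocks = [rng.standard_normal((n, p)) for p in (6, 8, 5, 5)]
y = blocks[0][:, 0] + 0.5 * blocks[1][:, 1] + 0.1 * rng.standard_normal(n)

def block_scores(X, n_comp=2):
    """First principal-component scores of one autoscaled block."""
    Xs = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    return U[:, :n_comp] * s[:n_comp]

# Score vectors per block, then a joint regression of y on all scores to
# judge the modelling strength each block contributes.
T = np.hstack([block_scores(X) for X in blocks])
X_design = np.column_stack([np.ones(n), T])
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)
y_hat = X_design @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```
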

  16. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  17. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously, and sequential 20 s frames were acquired; the study on each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R² = 0.68) was found between the values obtained by the two methods. Bland-Altman statistical analysis demonstrated that 97% of the values in the study (31 of 32 cases) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P analysis method is likely to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot and can be considered an alternative technique for finding and calculating the renal uptake rate.
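    The R-P plot itself reduces to a straight-line fit: plotting kidney counts over blood counts against the cumulative blood integral over blood counts gives the uptake rate as the slope. A sketch on synthetic curves (the blood-curve shape and parameter values are invented, not patient data):

```python
import numpy as np

# Synthetic renography curves (arbitrary units) built from the R-P model
#   K(t) = k * integral(B) + V * B(t),
# with uptake rate k and vascular fraction V.
t = np.arange(0, 1200, 20.0)                    # 20 s frames, ~20 min study
B = t * np.exp(-t / 180.0)                      # blood time-activity curve
cumB = np.concatenate([[0.0],
                       np.cumsum(0.5 * (B[1:] + B[:-1]) * np.diff(t))])
k_true, V_true = 0.004, 0.3
K = k_true * cumB + V_true * B

# Rutland-Patlak plot: K/B versus cumint(B)/B is a line with slope k and
# intercept V; the slope is the parenchymal uptake rate.
mask = B > 1e-9                                 # avoid the t = 0 division
xs, ys = cumB[mask] / B[mask], K[mask] / B[mask]
k_hat, V_hat = np.polyfit(xs, ys, 1)
```
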

  18. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a lot of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  19. Effects of Resistance Training on Matrix Metalloproteinase Activity in Skeletal Muscles and Blood Circulation During Aging

    Directory of Open Access Journals (Sweden)

    Ivo V. de Sousa Neto

    2018-03-01

    Full Text Available Aging is a complex, multifactorial process characterized by the accumulation of deleterious effects, including biochemical adaptations of the extracellular matrix (ECM). The purpose of this study was to investigate the effects of 12 weeks of resistance training (RT) on matrix metalloproteinase-2 (MMP-2) activity in skeletal muscles and on MMP-2 and MMP-9 activity in the blood circulation of young and old rats. Twenty-eight Wistar rats were randomly divided into four groups (n = 7 per group): young sedentary (YS), young trained (YT), old sedentary (OS), and old trained (OT). The stair-climbing RT consisted of one training session every other day, with 8–12 dynamic movements per climb. The animals were euthanized 48 h after the end of the experimental period. MMP-2 and MMP-9 activity was measured by zymography. Active MMP-2 activity was higher in the lateral gastrocnemius and flexor digitorum profundus muscles in the OT group than in the OS, YS, and YT groups (p ≤ 0.001). Moreover, active MMP-2 activity was higher in the medial gastrocnemius muscle in the OT group than in the YS and YT groups (p ≤ 0.001). The YS group presented lower active MMP-2 activity in the soleus muscle than the YT, OS, and OT groups (p ≤ 0.001). With respect to active MMP-2/9 activity in the bloodstream, the OT group displayed significantly reduced activity (p ≤ 0.001) compared to the YS and YT groups. In conclusion, RT up-regulates MMP-2 activity in aging muscles while down-regulating MMP-2 and MMP-9 in the blood circulation, suggesting that it may be a useful tool for the maintenance of ECM remodeling.

  20. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    Anelastic attenuation during seismic wave propagation is the cause of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss as depth increases. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect commonly occurring in deeper-level seismic data. The attenuation effect greatly influences the seismic images of the deeper target level, creating pitfalls in several respects. Seismic amplitude in the deeper target level often cannot represent the real subsurface character because of low amplitude values or chaotic events near the Basement. In terms of frequency, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is a simple tool to point out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before a more advanced interpretation method is applied. A quick look at the post-stack seismic data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the north-east area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs in deeper-level seismic. We evaluate this pitfall by applying Gabor deconvolution to address the attenuation problem. Gabor deconvolution forms a partition of unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor deconvolution estimates both the magnitude of the source signature and its attenuation function. The enhanced seismic shows better imaging in the pitfall area that was previously detected as a vast bright-spot zone. 
When the enhanced seismic is used for further advanced reprocessing, the seismic impedance and Vp/Vs ratio slices show a better reservoir delineation, in which the

  1. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  2. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described and some examples are presented to help users run the programs and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)
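    The core of any FFT-based deconvolution of this kind is regularized division in the frequency domain. A minimal NumPy sketch of the general technique (an illustration only, not the IN10 program itself; the Gaussian resolution function and the stabilization constant eps are assumptions):

```python
import numpy as np

def fft_deconvolve(measured, response, eps=1e-6):
    """Recover a signal from measured = signal (*) response (circular
    convolution) by regularized division in the Fourier domain."""
    M = np.fft.fft(measured)
    R = np.fft.fft(response, n=len(measured))
    # eps guards against division by near-zero spectral components.
    S = M * np.conj(R) / (np.abs(R) ** 2 + eps)
    return np.real(np.fft.ifft(S))

# Blur a two-spike signal with a Gaussian resolution function centred at index 0.
n = 64
x = np.arange(n)
signal = np.zeros(n)
signal[[10, 30]] = [1.0, 0.5]
response = np.roll(np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2), -n // 2)
measured = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(response)))
recovered = fft_deconvolve(measured, response)
```

    The regularization suppresses frequencies where the resolution function has no power, so the recovered spikes are slightly broadened but land at the correct positions.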

  3. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors

  4. Hierarchical matrix techniques for the solution of elliptic equations

    KAUST Repository

    Chávez, Gustavo

    2014-05-04

    Hierarchical matrix approximations are a promising tool for approximating low-rank matrices, given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major applications of this technology, but it can be applied in other areas where low-rank properties exist. Such is the case of the Block Cyclic Reduction algorithm, which is used as a direct solver for the constant-coefficient Poisson equation. We explore the variable-coefficient case, also using Block Cyclic Reduction, with the addition of hierarchical matrices to represent matrix blocks, hence improving the otherwise O(N^2) algorithm into an efficient O(N) algorithm.
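    The low-rank property that hierarchical matrices exploit can be illustrated with a truncated SVD of a single well-separated kernel block (a generic sketch; the kernel, point sets, and tolerance below are assumptions, not taken from the paper):

```python
import numpy as np

# Off-diagonal block of a smooth kernel K(x, y) = 1/(1 + |x - y|):
# targets and sources are well separated, so the block is numerically low-rank.
x = np.linspace(0.0, 1.0, 200)       # target points
y = np.linspace(2.0, 3.0, 200)       # source points
block = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Truncated SVD gives a compact representation of the 200 x 200 block.
U, s, Vt = np.linalg.svd(block, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
```

    The numerical rank is far below 200, so storing the factors instead of the dense block gives the memory and arithmetic savings on which hierarchical-matrix solvers rely.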

  5. Invertibility and Explicit Inverses of Circulant-Type Matrices with k-Fibonacci and k-Lucas Numbers

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    Full Text Available Circulant matrices have important applications in solving ordinary differential equations. In this paper, we consider circulant-type matrices with the k-Fibonacci and k-Lucas numbers. We discuss the invertibility of these circulant matrices and present the explicit determinant and inverse matrix by constructing the transformation matrices, which generalizes the results in Shen et al. (2011).
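    For arbitrary entries (not only k-Fibonacci or k-Lucas numbers), the link between circulant matrices and the DFT gives the inverse directly: the eigenvalues of a circulant are the DFT of its first column, so the inverse is the circulant whose first column is the inverse DFT of the reciprocal eigenvalues. A sketch with arbitrary example entries:

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def circulant_inverse(c):
    """Invert a circulant matrix using the DFT of its first column."""
    lam = np.fft.fft(c)              # eigenvalues of the circulant
    if np.any(np.abs(lam) < 1e-12):  # singular iff some eigenvalue vanishes
        raise np.linalg.LinAlgError("singular circulant matrix")
    return circulant(np.real(np.fft.ifft(1.0 / lam)))

c = np.array([4.0, 1.0, 0.0, 2.0])   # arbitrary example entries
C = circulant(c)
C_inv = circulant_inverse(c)
```

    The same diagonalization also yields the determinant as the product of the eigenvalues, which is the route explicit-inverse papers typically formalize.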

  6. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
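    The block view of matrix multiplication can be checked directly: partition each matrix into a 2x2 grid of blocks and apply the scalar 2x2 product formula block-wise (a generic sketch, not the visualization from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def blocks(M):
    """Partition a 4x4 matrix into a 2x2 grid of 2x2 blocks."""
    return M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]

A11, A12, A21, A22 = blocks(A)
B11, B12, B21, B22 = blocks(B)

# The block formula mirrors the scalar 2x2 product.
C = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
```

    The block-wise result equals the ordinary product A @ B, which is exactly what makes the block notation a faithful visual shorthand.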

  7. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly handle the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.
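    The frequency-domain normalization described above can be sketched as a Wiener-style deconvolution filter (a simplified toy illustration of the idea, not the authors' full FWI objective; the damped trace and stabilization constant are assumptions). For perfectly matched data the filter collapses to a spike at zero lag:

```python
import numpy as np

def deconv_filter(d_mod, d_obs, eps=1e-3):
    """Frequency-domain filter mapping modeled data onto observed data.
    Normalizing by |D_mod|^2 makes the filter insensitive to the overall
    amplitude decay of a damped wavefield."""
    D_mod = np.fft.fft(d_mod)
    D_obs = np.fft.fft(d_obs)
    W = D_obs * np.conj(D_mod) / (np.abs(D_mod) ** 2 + eps)
    return np.real(np.fft.ifft(W))

t = np.linspace(0.0, 1.0, 128, endpoint=False)
trace = np.sin(2 * np.pi * 5 * t) * np.exp(-3.0 * t)  # toy damped trace
w = deconv_filter(trace, trace)  # matched data: filter is a spike at lag 0
```

    A misfit that penalizes the filter's departure from a zero-lag spike is therefore insensitive to the exponential amplitude decay, which is the property the abstract exploits.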

  8. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly handle the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  9. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses gamma-ray spectrum interpretation technology in nuclear logging. The principles of familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitations in determining the contents of radioactive elements on unsaturated ledges (where radioactive elements are distributed unevenly) are discussed. On the basis of the intensity gamma-logging quantitative interpretation technology using the deconvolution method, a new quantitative interpretation method of separating radioactive elements is presented for interpreting gamma spectrum logging. This point-by-point spectrum striping deconvolution technology can give the logging data a quantitative interpretation. (authors)

  10. Deconvolution analysis of sup(99m)Tc-methylene diphosphonate kinetics in metabolic bone disease

    International Nuclear Information System (INIS)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.; Hamburg Univ.

    1981-01-01

    The kinetics of sup(99m)Tc-methylene diphosphonate (MDP) and 47 Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of sup(99m)Tc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47 Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The sup(99m)Tc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between sup(99m)Tc-MDP bone accumulation rates and the results of 47 Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P 47 Ca kinetics might suggest a preferential binding of sup(99m)Tc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations. (orig.)

  11. Supercritical water natural circulation flow stability experiment research

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Dongliang; Zhou, Tao; Li, Bing [North China Electric Power Univ., Beijing (China). School of Nuclear Science and Engineering; North China Electric Power Univ., Beijing (China). Inst. of Nuclear Thermalhydraulic Safety and Standardization; North China Electric Power Univ., Beijing (China). Beijing Key Lab. of Passive Safety Technology for Nuclear Energy; Huang, Yanping [Nuclear Power Institute of China, Chengdu (China). Science and Technology on Reactor System Design Technology Lab.

    2017-12-15

    The thermal-hydraulic characteristics of supercritical water natural circulation play an important role in the safety of Generation-IV supercritical water-cooled reactors. Hence it is crucial to conduct natural circulation heat transfer experiments with supercritical water. The heat transfer characteristics have been studied under different system pressures in natural circulation systems. Results show that the fluctuations in the subcritical flow rate (for natural circulation) are relatively small compared to those in the supercritical flow rate. By increasing the heating power, it is observed that the amplitude (and time period) of the fluctuation tends to become larger for the natural circulation of supercritical water, indicating the presence of flow instability in the supercritical water. The flow instability phenomenon can also be observed when the system pressure is suddenly reduced from the supercritical state to the subcritical state. At the test outlet section, the temperature is prone to increase suddenly, whereas a blocking effect may be observed at the inlet section of the experiment.

  12. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  13. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in vitro/in vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm of its own, but rather the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
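    Because discrete convolution is polynomial multiplication, the numerical convolution/deconvolution pair can be sketched with NumPy's polynomial routines (a generic illustration, not the paper's Excel implementation; the release and disposition profiles below are made-up values):

```python
import numpy as np

# Hypothetical in-vitro input rate per interval and unit-impulse response.
input_rate = np.array([0.0, 0.4, 0.3, 0.2, 0.1])
weighting = np.array([1.0, 0.6, 0.36, 0.216])

# Convolution: predict the in-vivo response from the input.
response = np.convolve(input_rate, weighting)

# Deconvolution: recover the input from the response and the weighting,
# since discrete deconvolution is polynomial division.
recovered, remainder = np.polydiv(response, weighting)
```

    With noise-free data the division is exact and the remainder vanishes; with real data the division amplifies noise, which is why practical deconvolution schemes add smoothing or regularization.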

  14. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring layer thickness, such as that of oxide on corroded steel, is described. A time-domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that the quality of their results depends on the ability of a skilled operator to line up an arbitrary common feature of the received signals; using deconvolution, no operator manipulation is necessary, so less highly trained personnel can successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions, but in the case of magnetite its high velocity of sound meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)

  15. Effects of different block size distributions in pressure transient response of naturally fractured reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Montazeri, G.H. [Islamic Azad University, Mahshahr (Iran, Islamic Republic of). Dept. of Chemical and Petroleum Engineering], E-mail: montazeri_gh@yahoo.com; Tahami, S.A. [Mad Daneshgostar Tabnak Co. (MDT),Tehran (Iran, Islamic Republic of); Moradi, B.; Safari, E. [Iranian Central Oil Fields Co, Tehran (Iran, Islamic Republic of)], E-mail: morady.babak@gmail.com

    2011-07-15

    This paper presents a model for pressure transient and derivative analysis of naturally fractured reservoirs, based on a formulation of interporosity flow incorporating variations in matrix block size, which is inversely related to fracture intensity. Geologically realistic probability density functions (PDFs) of matrix block size, such as uniform, bimodal, linear, and exponential distributions, are examined, and pseudo-steady-state and transient models for interporosity flow are assumed. The results have been physically interpreted and, in contrast to results obtained by other authors, it was found that the shapes of the pressure derivative curves for different PDFs are basically identical within some ranges of block size variability, interporosity skin, PDF parameters, and matrix storage capacity. This tool can give insight into the distribution of block sizes and shapes, together with other sources of information such as logs and geological observations. (author)

  16. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both in the case of a temperature-independent frequency factor, s, and in the case where s is a function of temperature.
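    The initial rise method rests on the fact that on the low-temperature edge of a glow peak, before the traps deplete, the TL intensity grows as I(T) ∝ exp(-E/kT), so the slope of ln I against 1/(kT) is -E regardless of the kinetics order. A sketch with a simulated first-order (Randall-Wilkins) peak, where the trap depth E, frequency factor s, and heating rate are assumed values:

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant (eV/K)
E_true = 1.0     # assumed trap depth (eV)
s = 1e12         # assumed frequency factor (1/s)
beta = 1.0       # assumed linear heating rate (K/s)

T = np.linspace(300.0, 500.0, 2000)
boltz = np.exp(-E_true / (k_B * T))
# Randall-Wilkins first-order glow curve: I ~ s * exp(-E/kT) * depletion term.
integral = np.concatenate(
    ([0.0], np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))
I = s * boltz * np.exp(-(s / beta) * integral)

# Initial rise: fit ln(I) vs 1/(k_B*T) on the low-temperature edge,
# below ~5% of the peak maximum where trap depletion is negligible.
rise = (I < 0.05 * I.max()) & (T < T[np.argmax(I)])
slope = np.polyfit(1.0 / (k_B * T[rise]), np.log(I[rise]), 1)[0]
E_est = -slope
```

    The recovered E_est matches the simulated trap depth closely, which is why the IR estimate is a useful independent check on CGCD fits.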

  17. An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2014-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method, breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM algorithm has to handle extra...... irregularity from three aspects: (1) the number of the nonzero entries in the result sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the result sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input....... Load balancing builds on the number of the necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art GPU SpGEMM methods in the CUSPARSE library and the CUSP library and the latest CPU SpGEMM method in the Intel Math Kernel Library, our...

  18. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of the impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust, whether in single or consecutive impact force reconstruction.
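    A much simpler solver than the paper's primal-dual interior point method also illustrates how replacing the l2-norm with the l1-norm promotes a sparse force history: iterative soft-thresholding (ISTA) applied to a toy convolution model (all problem sizes, the impulse response, and the regularization weight below are assumptions):

```python
import numpy as np

def ista(H, y, lam, n_iter=3000):
    """Minimize 0.5*||H x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - H.T @ (H @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy problem: a sparse impact force convolved with a decaying response.
n = 100
h = np.exp(-np.arange(20) / 4.0)           # assumed impulse response
H = np.zeros((n + len(h) - 1, n))
for j in range(n):
    H[j:j + len(h), j] = h                 # Toeplitz convolution matrix
f_true = np.zeros(n)
f_true[[20, 60]] = [1.0, 0.7]
y = H @ f_true
f_hat = ista(H, y, lam=0.02)
```

    The l1 penalty drives the reconstruction toward a few isolated spikes at the true impact instants, whereas an l2 penalty would smear the force over many samples.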

  19. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  20. Endocytosis of collagen by hepatic stellate cells regulates extracellular matrix dynamics.

    Science.gov (United States)

    Bi, Yan; Mukhopadhyay, Dhriti; Drinane, Mary; Ji, Baoan; Li, Xing; Cao, Sheng; Shah, Vijay H

    2014-10-01

    Hepatic stellate cells (HSCs) generate matrix, which in turn may also regulate HSC function during liver fibrosis. We hypothesized that HSCs may endocytose matrix proteins to sense and respond to changes in the microenvironment. Primary human HSCs, LX2, or mouse embryonic fibroblasts (MEFs) [wild-type; c-abl(-/-); or Yes, Src, and Fyn knockout mice (YSF(-/-))] were incubated with fluorescent-labeled collagen or gelatin. Fluorescence-activated cell sorting analysis and confocal microscopy were used for measuring cellular internalization of matrix proteins. A targeted PCR array and quantitative real-time PCR were used to evaluate gene expression changes. HSCs and LX2 cells endocytose collagens in a concentration- and time-dependent manner. Endocytosed collagen colocalized with Dextran 10K, a marker of macropinocytosis, and 5-ethylisopropyl amiloride, an inhibitor of macropinocytosis, reduced collagen internalization by 46%. Cytochalasin D and ML7 blocked collagen internalization by 47% and 45%, respectively, indicating that actin and myosin are critical for collagen endocytosis. Wortmannin and AKT inhibitor blocked collagen internalization by 70% and 89%, respectively, indicating that matrix macropinocytosis requires phosphoinositide-3-kinase (PI3K)/AKT signaling. Overexpression of dominant-negative dynamin-2 K44A blocked matrix internalization by 77%, indicating a role for dynamin-2 in matrix macropinocytosis. Whereas c-abl(-/-) MEF showed impaired matrix endocytosis, YSF(-/-) MEF surprisingly showed increased matrix endocytosis. This was also associated with complex regulation of genes related to matrix dynamics, including increased matrix metalloproteinase 9 (MMP-9) mRNA levels and zymographic activity. HSCs endocytose matrix proteins through macropinocytosis that requires a signaling network composed of PI3K/AKT, dynamin-2, and c-abl. Interaction with the extracellular matrix regulates matrix dynamics through modulating multiple gene expressions including MMP-9.

  1. The Use of Treatment Concurrences to Assess Robustness of Binary Block Designs Against the Loss of Whole Blocks

    OpenAIRE

    Godolphin, JD; Godolphin, EJ

    2015-01-01

    © 2015 Australian Statistical Publishing Association Inc. Criteria are proposed for assessing the robustness of a binary block design against the loss of whole blocks, based on summing entries of selected upper non-principal sections of the concurrence matrix. These criteria improve on the minimal concurrence concept that has been used previously and provide new conditions for measuring the robustness status of a design. The robustness properties of two-associate partially balanced designs ar...

  2. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  3. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is non-uniqueness. This problem is overcome by applying the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  4. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The approach adopted consists of taking the physical properties into account in the signal processing, to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is somewhat loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. This a priori information reflects the physical properties of ultrasonic signals: the defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution then becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm enable a large amount of data to be processed quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm makes it possible not only to remove the waveform emitted by the transducer but also to estimate the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  5. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of information packets to construct redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of received packets is presented. The paper includes examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.

  6. A Blocking Criterion for Self-Compacting Concrete

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Stang, Henrik; Geiker, Mette Rica

    2005-01-01

    To benefit from the full potential of Self-Compacting Concrete (SCC), prediction tools for the form-filling ability of SCC are needed. This paper presents a theoretical concept for assessment of the blocking resistance of SCC. A critical concrete flow rate above which no blocking occurs...... is introduced. The critical flow rate takes into account the mix design, the rheological properties of the matrix and concrete, and the geometry of the flow domain....

  7. A framework for general sparse matrix-matrix multiplication on GPUs and heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle...... extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data...... memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary...

  8. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  9. CSF circulation in subjects with the empty sella syndrome

    International Nuclear Information System (INIS)

    Brismar, K.; Bergstrand, G.

    1981-01-01

In the present study the CSF circulation was analyzed in 48 subjects with ESS using gamma cisternography, pneumoencephalography (PEG) and computed tomography (CT). In 80% of the subjects the CSF circulation was retarded, with a convexity block combined with widened CSF transport pathways and basal cisterns. These findings were correlated with the clinical signs and symptoms. Headache, psychiatric symptoms, visual field defects and obesity, however, were not related to the impaired CSF circulation. It is concluded that impaired CSF dynamics leading to intermittent increases of ICP have a major impact on the development of the ESS and that most of the patients' complaints are related to this disturbance. Thus it is important to obtain information on the CSF dynamics concurrent with the diagnosis of ESS. For this purpose PEG or CT may be used as the first examination. Moreover, the patient should be examined at least every second year for symptoms and signs of progressive impairment of the CSF circulation. (orig./MG)

  10. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
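
As a concrete illustration of the pseudoinverse route described above, the following sketch resolves a simulated three-component mixture by forming a truncated-SVD pseudoinverse. The matrix sizes, noise level, and truncation threshold are illustrative choices, not taken from the paper.

```python
import numpy as np

# Deconvolute a measured mixture b into component fractions x, given a
# matrix A whose columns are the (here synthetic) pure-component spectra.
rng = np.random.default_rng(0)
A = rng.random((50, 3))                    # columns: 3 pure-component spectra
x_true = np.array([0.2, 0.5, 0.3])         # true mixing fractions
b = A @ x_true + 1e-4 * rng.standard_normal(50)   # noisy mixture spectrum

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))          # keep only significant singular values
A_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T  # truncated pseudoinverse
x = A_pinv @ b                             # least-squares estimate of fractions
```

Dropping singular values below the threshold is what keeps the pseudoinverse from amplifying noise when A is nearly rank-deficient; here A is well-conditioned, so k equals 3 and the recovered x is close to x_true.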

  11. Entanglement property in matrix product spin systems

    International Nuclear Information System (INIS)

    Zhu Jingmin

    2012-01-01

We study the entanglement properties of matrix product spin-ring systems systematically using the von Neumann entropy. We find that: (i) the Hilbert space dimension of one spin determines the upper limit of the maximal value of the entanglement entropy of one spin, while for multiparticle entanglement entropy, the upper limit of the maximal value depends on the dimension of the representation matrices. Based on the theory, we can realize the maximum of the entanglement entropy of any spin block by choosing the appropriate control parameter values. (ii) When the entanglement entropy of one spin takes its maximal value, the entanglement entropy of an asymptotically large spin block, i.e. the renormalization group fixed point, is not likely to take its maximal value, and so only the entanglement entropy S_n of a spin block that varies with size n can fully characterize the spin-ring entanglement feature. Finally, we give the entanglement dynamics, i.e. the Hamiltonian of the matrix product system. (author)

  12. TLD-100 glow-curve deconvolution for the evaluation of the thermal stress and radiation damage effects

    CERN Document Server

    Sabini, M G; Cuttone, G; Guasti, A; Mazzocchi, S; Raffaele, L

    2002-01-01

In this work, the dose response of TLD-100 dosimeters has been studied in a 62 MeV clinical proton beam. The signal versus dose curve has been compared with the one measured in a ⁶⁰Co beam. Different experiments have been performed in order to observe the thermal stress and radiation damage effects on the detector sensitivity. A LET dependence of the TL response has been observed. In order to get a physical interpretation of these effects, a computerised glow-curve deconvolution has been employed. The results of all the performed experiments and deconvolutions are extensively reported, and possible fields of application of TLD-100 in clinical proton dosimetry are discussed.

  13. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

Full Text Available We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  14. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

New x-ray phase contrast imaging techniques that do not use synchrotron radiation face a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to the simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
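
Of the three techniques compared, Wiener filtering is the simplest to write down. The 1-D sketch below deblurs a toy box signal; the Gaussian PSF, noise level, and noise-to-signal constant `nsr` are illustrative assumptions, not the paper's phantom or settings.

```python
import numpy as np

# Wiener deconvolution in the Fourier domain: W = H* / (|H|^2 + nsr),
# which inverts the blur where the PSF transfer function H is strong and
# backs off where noise would be amplified.
n = 256
x = np.zeros(n); x[100:120] = 1.0                  # "true" object: a box
t = np.arange(n) - n // 2
psf = np.exp(-0.5 * (t / 3.0) ** 2); psf /= psf.sum()
psf = np.roll(psf, -n // 2)                        # center PSF at index 0
rng = np.random.default_rng(1)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
y += 1e-3 * rng.standard_normal(n)                 # blurred + noisy observation

H = np.fft.fft(psf)
nsr = 1e-4                                         # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)            # Wiener filter
x_hat = np.real(np.fft.ifft(W * np.fft.fft(y)))    # restored signal
```

Raising `nsr` trades ringing and noise amplification for resolution; Tikhonov regularization has the same algebraic form with a different damping term, which is why the two behave similarly at low noise.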

  15. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study its performance and to compare it with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method over previous ones include its computational efficiency, its ability to achieve adequate regularization and so reproduce less noisy solutions, and the fact that it does not require prior knowledge of the noise condition. The proposed method is applied to actual patient studies with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
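
One standard way to realize regularized SVD deconvolution of this kind is through Tikhonov filter factors, which damp rather than truncate the contributions of small singular values. The blur matrix, test curve, and regularization parameter below are illustrative stand-ins, not the paper's clinical settings or its regression-based parameter selection.

```python
import numpy as np

# Tikhonov-regularized deconvolution via SVD filter factors
# f_i = s_i^2 / (s_i^2 + lam^2): small singular values are damped,
# suppressing the noise blow-up of the naive inverse.
rng = np.random.default_rng(2)
n = 40
d = np.subtract.outer(np.arange(n), np.arange(n))
A = np.exp(-0.5 * (d / 2.0) ** 2)              # severely ill-conditioned blur
x_true = np.exp(-0.5 * ((np.arange(n) - 15) / 4.0) ** 2)  # smooth test curve
b = A @ x_true + 1e-3 * rng.standard_normal(n) # noisy measured curve

U, s, Vt = np.linalg.svd(A)
lam = 1e-2
f = s**2 / (s**2 + lam**2)                     # Tikhonov filter factors
x_reg = Vt.T @ (f * (U.T @ b) / s)             # regularized solution
x_naive = Vt.T @ ((U.T @ b) / s)               # naive inverse: noise-dominated
```

Choosing `lam` is exactly the regularization-parameter selection problem the paper's regression approach addresses; here a fixed value suffices to show the qualitative difference between the damped and naive solutions.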

  16. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for

  17. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
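
Image-domain MLEM deconvolution of this kind is, in its basic form, the Richardson-Lucy iteration: multiply the current estimate by the back-projected ratio of the measurement to the re-blurred estimate. The 1-D toy below illustrates the update; the signal and kernel are illustrative stand-ins, not the blurred PET frames of the study.

```python
import numpy as np

# Richardson-Lucy (multiplicative MLEM) deconvolution in 1-D.
def richardson_lucy(blurred, psf, n_iter=50):
    est = np.full_like(blurred, blurred.mean())   # flat, positive initial estimate
    psf_flip = psf[::-1]                          # adjoint of the convolution
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")           # re-blur estimate
        ratio = blurred / np.maximum(conv, 1e-12)           # measurement / model
        est *= np.convolve(ratio, psf_flip, mode="same")    # multiplicative update
    return est

x = np.zeros(64); x[20:26] = 4.0; x[40] = 8.0     # "true" activity distribution
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])     # normalized blur kernel
y = np.convolve(x, psf, mode="same")              # blurred observation
x_hat = richardson_lucy(y, psf)
```

The multiplicative form keeps the estimate non-negative at every iteration, which is why MLEM-style deconvolution is attractive for count data; the OSL-regularized variant in the record above adds a penalty term to this update to control variance.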

  18. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  19. Matrix Management in Practice in Access Services at the NCSU Libraries

    Science.gov (United States)

    Harris, Colleen S.

    2010-01-01

    The former Associate Head of Access and Delivery Services of the North Carolina State University Libraries reports on successful use of matrix management techniques for the Circulation and Reserves unit of the department. Despite their having fallen out of favor in much of the management literature, matrix management principles are useful for…

  20. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  1. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

In the last few years a number of excellent deconvolution algorithms have been developed for use in "de-blurring" 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from ⁸⁵Sr. The technique, when applied to a series of well annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  2. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  3. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Aiim

    2006-01-01

Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  4. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of a single erroneous packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. A decoding algorithm based on syndrome decoding is presented to correct a single erroneous packet in a group of n=k+r received packets. The paper includes examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
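
For intuition, the r = 1 case of such a systematic packet code degenerates to a plain XOR parity packet, which already recovers any single lost packet. The sketch below shows only that reduced case and omits the bit-level shift operators and Vandermonde structure of the full design.

```python
# Systematic packet-level erasure code with a single XOR parity packet:
# the simplest analogue of the r redundant packets described above.

def xor_packets(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(info_packets):
    """Append one parity packet (XOR of all info packets) to the block."""
    parity = info_packets[0]
    for p in info_packets[1:]:
        parity = xor_packets(parity, p)
    return info_packets + [parity]          # systematic: info first, then parity

def recover(packets, lost_index):
    """Rebuild the packet at lost_index by XOR-ing all surviving packets."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    rebuilt = survivors[0]
    for p in survivors[1:]:
        rebuilt = xor_packets(rebuilt, p)
    return rebuilt

block = encode([b"pkt0", b"pkt1", b"pkt2"])   # uniform-length packets, k=3, r=1
lost = recover(block, 1)                      # reconstructs b"pkt1"
```

The uniform-length requirement quoted in the abstract is visible here too: `xor_packets` only makes sense for packets of equal size, which is what the paper's padding overhead (and its non-Vandermonde alternatives) is about.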

  5. Quantum phase transitions in matrix product states

    International Nuclear Information System (INIS)

    Zhu Jingmin

    2008-01-01

We present a new, general and much simpler scheme to construct various quantum phase transitions (QPTs) in spin chain systems with matrix product ground states. Using the scheme, we consider one kind of matrix product state (MPS) QPT and provide a concrete model. We also study the properties of the concrete example and show that a kind of QPT appears, accompanied by a discontinuity in the parity-absent block physical observable and a correlation length that diverges only for the parity-absent block operator; moreover, the fixed point of the transition is an isolated intermediate-coupling fixed point of the renormalization flow, and the entanglement entropy of a half-infinite chain is discontinuous. (authors)

  6. Quantum Phase Transitions in Matrix Product States

    International Nuclear Information System (INIS)

    Jing-Min, Zhu

    2008-01-01

We present a new, general and much simpler scheme to construct various quantum phase transitions (QPTs) in spin chain systems with matrix product ground states. Using the scheme, we consider one kind of matrix product state (MPS) QPT and provide a concrete model. We also study the properties of the concrete example and show that a kind of QPT appears, accompanied by a discontinuity in the parity-absent block physical observable and a correlation length that diverges only for the parity-absent block operator; moreover, the fixed point of the transition is an isolated intermediate-coupling fixed point of the renormalization flow, and the entanglement entropy of a half-infinite chain is discontinuous

  7. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to perform a smoothing of the data because of the sensitivity of the deconvolution algorithms with respect to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value for two filters (linear and non-linear). In order to test the effectiveness of these optimal smoothing values, some parameters of the calculated RRF were considered using this optimal smoothing. The comparison of these parameters with the theoretical ones revealed a better result in the case of the linear filter than in the non-linear case. The study was carried out simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  8. Hypoxia, leukocytes, and the pulmonary circulation.

    Science.gov (United States)

    Stenmark, Kurt R; Davie, Neil J; Reeves, John T; Frid, Maria G

    2005-02-01

Data are rapidly accumulating in support of the idea that circulating monocytes and/or mononuclear fibrocytes are recruited to the pulmonary circulation of chronically hypoxic animals and that these cells play an important role in the pulmonary hypertensive process. Hypoxic induction of monocyte chemoattractant protein-1, stromal cell-derived factor-1, vascular endothelial growth factor-A, endothelin-1, and transforming growth factor-beta(1) in pulmonary vessel wall cells, either directly or indirectly via signals from hypoxic lung epithelial cells, may be a critical first step in the recruitment of circulating leukocytes to the pulmonary circulation. In addition, hypoxic stress appears to induce release of increased numbers of monocytic progenitor cells from the bone marrow, and these cells may have upregulated expression of receptors for the chemokines produced by the lung circulation, which thus facilitates their specific recruitment to the pulmonary site. Once present, macrophages/fibrocytes may exert paracrine effects on resident pulmonary vessel wall cells stimulating proliferation, phenotypic modulation, and migration of resident fibroblasts and smooth muscle cells. They may also contribute directly to the remodeling process through increased production of collagen and/or differentiation into myofibroblasts. In addition, they could play a critical role in initiating and/or supporting neovascularization of the pulmonary artery vasa vasorum. The expanded vasa network may then act as a conduit for further delivery of circulating mononuclear cells to the pulmonary arterial wall, creating a feedforward loop of pathological remodeling.
Future studies will need to determine the mechanisms that selectively induce leukocyte/fibrocyte recruitment to the lung circulation under hypoxic conditions, their direct role in the remodeling process via production of extracellular matrix and/or differentiation into myofibroblasts, their impact on the phenotype of resident smooth muscle

  9. Direct integration of the S-matrix applied to rigorous diffraction

    International Nuclear Information System (INIS)

    Iff, W; Lindlein, N; Tishchenko, A V

    2014-01-01

A novel Fourier method for rigorous diffraction computation at periodic structures is presented. The procedure is based on a differential equation for the S-matrix, which allows direct integration of the S-matrix blocks. This results in a new method in Fourier space, which can be considered a numerically stable and well-parallelizable alternative to the conventional differential method based on T-matrix integration and subsequent conversions from the T-matrices to S-matrix blocks. Integration of the novel differential equation in an implicit manner is expounded. The applicability of the new method is shown on the basis of 1D periodic structures. It is clear, however, that the new technique can also be applied to arbitrary 2D periodic or periodized structures. The complexity of the new method is O(N³), similar to that of the conventional differential method, with N being the number of diffraction orders. (fast track communication)

  10. Omentin-1 prevents cartilage matrix destruction by regulating matrix metalloproteinases.

    Science.gov (United States)

    Li, Zhigang; Liu, Baoyi; Zhao, Dewei; Wang, BenJie; Liu, Yupeng; Zhang, Yao; Li, Borui; Tian, Fengde

    2017-08-01

    Matrix metalloproteinases (MMPs) play a crucial role in the degradation of the extracellular matrix and pathological progression of osteoarthritis (OA). Omentin-1 is a newly identified anti-inflammatory adipokine. Little information regarding the protective effects of omentin-1 in OA has been reported before. In the current study, our results indicated that omentin-1 suppressed expression of MMP-1, MMP-3, and MMP-13 induced by the proinflammatory cytokine interleukin-1β (IL-1β) at both the mRNA and protein levels in human chondrocytes. Importantly, administration of omentin-1 abolished IL-1β-induced degradation of type II collagen (Col II) and aggrecan, the two major extracellular matrix components in articular cartilage, in a dose-dependent manner. Mechanistically, omentin-1 ameliorated the expression of interferon regulatory factor 1 (IRF-1) by blocking the JAK-2/STAT3 pathway. Our results indicate that omentin-1 may have a potential chondroprotective therapeutic capacity. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  11. Self-Assembly of Block Copolymer Chains To Promote the Dispersion of Nanoparticles in Polymer Nanocomposites

    Science.gov (United States)

    2017-01-01

In this paper we adopt molecular dynamics simulations to study the amphiphilic AB block copolymer (BCP) mediated nanoparticle (NP) dispersion in polymer nanocomposites (PNCs), with the A-block being compatible with the NPs and the B-block being miscible with the polymer matrix. The effects of the number and components of BCP, as well as the interaction strength between the A-block and the NPs, on the spatial organization of the NPs are explored. We find that increasing the fraction of the A-block affects the dispersion of the NPs differently than increasing that of the B-block. We also find that the best dispersion state of the NPs occurs at a moderate interaction strength between the A-block and the NPs. Meanwhile, the stress–strain behavior is probed. Our simulation results verify that adopting BCP is an effective way to adjust the dispersion of NPs in the polymer matrix and, further, to manipulate the mechanical properties. PMID:28892620

  12. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

This work investigates the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for the solution of the underlying problem, as recently investigated in the contexts of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different communities, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and run time.
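
As a baseline for the easy special case: when a matrix can be permuted into fully decoupled diagonal blocks (no coupling rows or columns), connected components of its bipartite row-column graph already yield the permutation. The harder problem treated in the report, a fixed number of balanced blocks plus a small border, needs the graph/hypergraph partitioning models it proposes; the small matrix below is illustrative only.

```python
import numpy as np
from scipy.sparse import csr_matrix, bmat
from scipy.sparse.csgraph import connected_components

# Rows 0..m-1 and columns m..m+n-1 become the two vertex sets of a
# bipartite graph; each nonzero A[i, j] is an edge. Sorting rows and
# columns by component label permutes A into block-diagonal form.
A = csr_matrix(np.array([[1, 0, 1, 0],
                         [0, 1, 0, 1],
                         [1, 0, 0, 0],
                         [0, 1, 0, 0]], dtype=float))
m, n = A.shape
graph = bmat([[None, A], [A.T, None]])            # bipartite adjacency
_, labels = connected_components(graph, directed=False)
row_perm = np.argsort(labels[:m], kind="stable")
col_perm = np.argsort(labels[m:], kind="stable")
B = A[row_perm][:, col_perm].toarray()            # block-diagonal reordering
```

Here the two components {rows 0, 2; columns 0, 2} and {rows 1, 3; columns 1, 3} produce two 2x2 diagonal blocks. Real matrices rarely decompose this cleanly, which is exactly why the report resorts to vertex separators and hypergraph partitioning.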

  13. Coherent Multidecadal Atmospheric and Oceanic Variability in the North Atlantic: Blocking Corresponds with Warm Subpolar Ocean

    Science.gov (United States)

    Hakkinen, Sirpa M.; Rhines, P. B.; Worthen, D. L.

    2012-01-01

    Winters with frequent atmospheric blocking, in a band of latitudes from Greenland to Western Europe, are found to persist over several decades and correspond to a warm North Atlantic Ocean. This is evident in atmospheric reanalysis data, both modern and for the full 20th century. Blocking is approximately in phase with Atlantic multidecadal ocean variability (AMV). Wintertime atmospheric blocking involves a highly distorted jet stream, isolating large regions of air from the westerly circulation. It influences the ocean through wind-stress curl and the associated air-sea heat flux. While blocking is a relatively high-frequency phenomenon, it is strongly modulated over decadal timescales. The blocked regime (weaker ocean gyres, weaker air-sea heat flux, and, paradoxically, increased poleward transport of warm subtropical waters) contributes to the warm phase of AMV. Atmospheric blocking better describes the early 20th-century warming and the 1996-2010 warm period than does the NAO index. It has roots in the hemispheric circulation and jet stream dynamics. Subpolar Atlantic variability covaries with distant AMOC fields: both these connections may express the global influence of the subpolar North Atlantic Ocean on the global climate system.

  14. Amphiphilic block copolymers for drug delivery.

    Science.gov (United States)

    Adams, Monica L; Lavasanifar, Afsaneh; Kwon, Glen S

    2003-07-01

    Amphiphilic block copolymers (ABCs) have been used extensively in pharmaceutical applications ranging from sustained-release technologies to gene delivery. The utility of ABCs for delivery of therapeutic agents results from their unique chemical composition, which is characterized by a hydrophilic block that is chemically tethered to a hydrophobic block. In aqueous solution, polymeric micelles are formed via the association of ABCs into nanoscopic core/shell structures at or above the critical micelle concentration. Upon micellization, the hydrophobic core regions serve as reservoirs for hydrophobic drugs, which may be loaded by chemical, physical, or electrostatic means, depending on the specific functionalities of the core-forming block and the solubilizate. Although the Pluronics, composed of poly(ethylene oxide)-block-poly(propylene oxide)-block-poly(ethylene oxide), are the most widely studied ABC system, copolymers containing poly(L-amino acid) and poly(ester) hydrophobic blocks have also shown great promise in delivery applications. Because each ABC has unique advantages with respect to drug delivery, it may be possible to choose appropriate block copolymers for specific purposes, such as prolonging circulation time, introduction of targeting moieties, and modification of the drug-release profile. ABCs have been used for numerous pharmaceutical applications including drug solubilization/stabilization, alteration of the pharmacokinetic profile of encapsulated substances, and suppression of multidrug resistance. The purpose of this minireview is to provide a concise, yet detailed, introduction to the use of ABCs and polymeric micelles as delivery agents as well as to highlight current and past work in this area. Copyright 2003 Wiley-Liss, Inc. and the American Pharmacists Association

  15. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The adopted point of view consists in taking the physical properties into account in the signal processing, to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. The a priori information reflects the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution then becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge number of data to be processed quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  16. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

    The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where several trapping levels have a probability of retrapping, electrons trapped at one level can transfer to another level through retrapping via the conduction band during TL readout. At present, however, the method of CGCD takes no account of such electron transitions between trapping levels; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  17. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  18. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structure of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests of the threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the thread root to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the threads. The delay time equals the propagation delay of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  19. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that digitizes the nuclear radiation signals and manages high-speed USB transmission in Slave FIFO mode, while LabVIEW performs online data processing and display. A blind deconvolution method is used to remove pulse pile-up from the acquired signal and to restore the nuclear pulse shape. Real-time measurements demonstrate the advantages of the method in transmission speed. (authors)

  20. Compressed multi-block local binary pattern for object tracking

    Science.gov (United States)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirements of tracking, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector extracted from the multi-block local binary pattern was compressed via a sparse random Gaussian matrix as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
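
The compressive measurement step described above can be sketched as follows; the feature dimension, compressed dimension, and density of the sparse random Gaussian matrix are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_gaussian_matrix(k, d, density=0.1):
    """Sparse random Gaussian measurement matrix: most entries are zero,
    the remainder are drawn i.i.d. from N(0, 1)."""
    mask = rng.random((k, d)) < density
    return mask * rng.standard_normal((k, d))

d, k = 4096, 64             # hypothetical feature / compressed dimensions
R = sparse_gaussian_matrix(k, d)

feature = rng.random(d)     # stand-in for a multi-block LBP feature vector
compressed = R @ feature    # low-dimensional representation used for tracking
print(compressed.shape)     # (64,)
```

The sparsity keeps the projection cheap enough for per-frame use, while the random Gaussian entries approximately preserve distances between feature vectors, which is what makes classification in the compressed domain viable.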

  1. Morphology-properties relationship on nanocomposite films based on poly(styrene-block-diene-block-styrene) copolymers and silver nanoparticles

    Directory of Open Access Journals (Sweden)

    2011-02-01

    Full Text Available A comparative study on the self-assembled nanostructured morphology and the rheological and mechanical properties of four different triblock copolymers based on poly(styrene-block-diene-block-styrene) matrices, and of their respective nanocomposites with 1 wt% silver nanoparticles, is reported in this work. In order to obtain well-dispersed nanoparticles in the block copolymer matrix, dodecanethiol was used as surfactant, showing good affinity with both the nanoparticles and the polystyrene phase of the matrices, as predicted by the solubility parameters calculated from the Hoftyzer and Van Krevelen theory. The block copolymer with the highest PS content shows the highest tensile modulus and tensile strength, but also the smallest elongation at break. When silver nanoparticles treated with the surfactant were added to the block copolymer matrices, each system studied showed improved mechanical properties due to the good dispersion and good interfacial adhesion of the Ag nanoparticles in the matrices. Furthermore, it has been shown that semiempirical models such as the Guth and Gold equation and the Halpin-Tsai model can be used to predict the tensile modulus of the analyzed nanocomposites.

  2. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting, and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak-width variation performs best. The ultimate approach, as yet to be implemented, might be library-oriented fitting with allowed peak-position variation according to the peak energy uncertainty specified in the library. (author)

  3. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least squares Levenberg-Marquardt method. The main advantage of GlowFit is in its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in the so-called pattern files. GlowFit is a Microsoft Windows-operated user-friendly program. Its graphic interface enables easy intuitive manipulation of glow-peaks, at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
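
The "fixing selected parameters" idea mentioned above can be illustrated with a minimal sketch: if the peak positions and widths of a strongly overlapping glow curve are held fixed, the peak amplitudes follow from plain linear least squares. Gaussian peak shapes stand in here for the first-order kinetics shapes that GlowFit actually fits with the Levenberg-Marquardt method; all numbers are hypothetical.

```python
import numpy as np

T = np.linspace(300.0, 600.0, 301)      # temperature axis in K (hypothetical)

def peak(T, Tm, w):
    """Gaussian stand-in for a first-order kinetics glow peak at Tm."""
    return np.exp(-0.5 * ((T - Tm) / w) ** 2)

# Three strongly overlapping peaks with known (fixed) positions and widths.
centers, widths = [420.0, 450.0, 480.0], [15.0, 15.0, 15.0]
true_amps = np.array([100.0, 40.0, 70.0])
basis = np.stack([peak(T, c, w) for c, w in zip(centers, widths)], axis=1)
glow = basis @ true_amps                 # synthetic composite glow curve

# With positions and widths fixed, the amplitudes are a linear problem.
amps, *_ = np.linalg.lstsq(basis, glow, rcond=None)
print(np.round(amps, 6))                 # recovers [100, 40, 70]
```

In a full fit the positions and widths are also free, which makes the problem nonlinear; fixing them, as in this sketch, is exactly what stabilizes the deconvolution of heavily overlapping peaks.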

  4. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to 'cool down' the image with respect to the signal while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  5. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum

    International Nuclear Information System (INIS)

    Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H

    2015-01-01

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
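
The matched-filtering step compared in this note can be sketched with a synthetic chirp excitation and a two-path output signal; the sampling rate, chirp parameters, and path delays below are illustrative assumptions, not the study's values:

```python
import numpy as np

fs = 10e6                                   # hypothetical 10 MHz sampling rate
t = np.arange(0, 20e-6, 1 / fs)             # 200 samples

def chirp(t, f0=0.5e6, f1=2.0e6, T=5e-6):
    """Linear chirp of duration T used as the coded excitation."""
    s = np.zeros_like(t)
    on = t < T
    s[on] = np.sin(2 * np.pi * (f0 + (f1 - f0) * t[on] / (2 * T)) * t[on])
    return s

tx = chirp(t)
# Received signal: a direct path plus a weaker, later reflected path.
d1, d2 = 60, 130                            # transit times in samples
rx = np.roll(tx, d1) + 0.6 * np.roll(tx, d2)

# Matched filtering = cross-correlation of the output with the input chirp.
mf = np.correlate(rx, tx, mode='full')[len(t) - 1:]
k1 = int(np.argmax(mf))                     # strongest correlation peak
mf2 = mf.copy()
mf2[max(0, k1 - 20):k1 + 20] = 0            # suppress the first main lobe
k2 = int(np.argmax(mf2))                    # second transit time
print(k1, k2)                               # the two path delays in samples
```

The side lobes of the compressed chirp around each main lobe are what the active-set deconvolution in the note suppresses 3.5 times better than this matched filter.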

  6. Evidence for radical anion formation during liquid secondary ion mass spectrometry analysis of oligonucleotides and synthetic oligomeric analogues: a deconvolution algorithm for molecular ion region clusters.

    Science.gov (United States)

    Laramée, J A; Arbogast, B; Deinzer, M L

    1989-10-01

    It is shown that one-electron reduction is a common process that occurs in negative ion liquid secondary ion mass spectrometry (LSIMS) of oligonucleotides and synthetic oligonucleosides and that this process is in competition with proton loss. Deconvolution of the molecular anion cluster reveals contributions from (M-2H).-, (M-H)-, M.-, and (M + H)-. A model based on these ionic species gives excellent agreement with the experimental data. A correlation between the concentration of species arising via one-electron reduction [M.- and (M + H)-] and the electron affinity of the matrix has been demonstrated. The relative intensity of M.- is mass-dependent; this is rationalized on the basis of base-stacking. Base sequence ion formation is theorized to arise from M.- radical anion among other possible pathways.
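
The four-species model of the molecular anion cluster can be sketched as a small linear system: each of (M-2H).-, (M-H)-, M.-, and (M + H)- contributes the same isotope envelope shifted by its nominal mass offset, and the species abundances follow from least squares. The envelope and abundances below are hypothetical, chosen only to show the deconvolution mechanics:

```python
import numpy as np

# Hypothetical isotope envelope of a single species (monoisotopic peak first).
envelope = np.array([1.00, 0.45, 0.12, 0.02])

offsets = [-2, -1, 0, +1]            # (M-2H).-, (M-H)-, M.-, (M+H)-
n = len(envelope) + 3                # m/z channels covering all shifted copies
A = np.zeros((n, len(offsets)))
for col, off in enumerate(offsets):
    A[off + 2: off + 2 + len(envelope), col] = envelope

true_x = np.array([0.15, 1.00, 0.30, 0.10])   # hypothetical abundances
cluster = A @ true_x                           # observed molecular-anion cluster

# Deconvolve the cluster into the four species' contributions.
x, *_ = np.linalg.lstsq(A, cluster, rcond=None)
print(np.round(x, 6))                          # recovers the four abundances
```

Because the shifted envelopes are linearly independent, the overlapping cluster determines all four contributions uniquely in this noiseless sketch; with real spectra a nonnegativity constraint would typically be added.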

  7. An ocean circulation code on the Connection Machine

    International Nuclear Information System (INIS)

    Vitart, F.

    1993-01-01

    This work is part of the development of a global climate model based on the coupling of an ocean model with an atmosphere model. The objective was to develop this global model on a massively parallel machine, the Connection Machine CM2. The author presents the OPA7 code (equations, boundary conditions, solution of the equation system) and its parallelization on the CM2. The CM2 data structure is briefly described, and two tests are reported (a flat-bottom basin, and a topography with eight islands). The author then gives an overview of studies aimed at improving the ocean circulation code: the use of a new equation of state, the use of a surface-pressure formulation, and the use of a new mesh. He reports on the use of multi-block domains on the CM2 through advection tests and two-block tests

  8. Thermal stress analysis of HTGR fuel and control rod fuel blocks in the HTGR in-block carbonization and annealing furnace

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; McAfee, W.J.

    1977-01-01

    A new approach that utilizes the equivalent solid plate method has been applied to the thermal stress analysis of HTGR fuel and control rod fuel blocks. Cases were considered where these blocks, loaded with reprocessed HTGR fuel pellets, were being cured at temperatures up to 1800°C. A two-dimensional segment of a fuel block cross section including fuel, coolant holes, and graphite matrix was analyzed using the ORNL HEATING3 heat transfer code to determine the temperature-dependent effective thermal conductivity for the perforated region of the block. Using this equivalent conductivity to calculate the temperature distributions through different cross sections of the blocks, two-dimensional thermal-stress analyses were performed through application of the equivalent solid plate method. In this approach, the perforated material is replaced by solid homogeneous material of the same external dimensions but whose material properties have been modified to account for the perforations

  9. Exact solution of corner-modified banded block-Toeplitz eigensystems

    International Nuclear Information System (INIS)

    Cobanera, Emilio; Alase, Abhijeet; Viola, Lorenza; Ortiz, Gerardo

    2017-01-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev. (paper)
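
The objects involved can be illustrated numerically (this is not the authors' projector-based algorithm): assemble a banded block-Toeplitz matrix from its constituent blocks, then apply a corner modification that acts only on the boundary block rows. The blocks below are a hypothetical nearest-neighbour example.

```python
import numpy as np

def banded_block_toeplitz(blocks, N):
    """Assemble an (N*d) x (N*d) banded block-Toeplitz matrix from a dict
    mapping block-band offset s to the d x d block A_s, i.e. A[i, j] = A_{j-i}."""
    d = next(iter(blocks.values())).shape[0]
    T = np.zeros((N * d, N * d))
    for s, A in blocks.items():
        for i in range(N):
            j = i + s
            if 0 <= j < N:
                T[i * d:(i + 1) * d, j * d:(j + 1) * d] = A
    return T

# A block-tridiagonal example with 2x2 blocks satisfying A_{-1} = A_1^T,
# so the assembled matrix is symmetric.
blocks = {0:  np.array([[0.0, 1.0], [1.0, 0.0]]),
          1:  np.array([[0.0, 0.0], [1.0, 0.0]]),
          -1: np.array([[0.0, 1.0], [0.0, 0.0]])}
T = banded_block_toeplitz(blocks, N=20)

# A corner modification perturbs only the first (and/or last) block rows,
# modelling boundary conditions that break translational invariance.
Tc = T.copy()
Tc[:2, :2] += 0.5 * np.eye(2)

print(T.shape, bool(np.allclose(T, T.T)))
```

Diagonalizing `T` and `Tc` with `np.linalg.eigh` shows how a perturbation confined to one corner block reshapes the edge part of the spectrum while the bulk bands are barely affected, which is the phenomenon the paper's exact algorithm analyzes.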

  10. Liquor circulation disturbance after subarachnoid haemorrhages - comparative pneumoencephalography and liquor scintigraphic investigations

    International Nuclear Information System (INIS)

    Menzel, J.; Georgi, P.; Krastel, A.; Deutsches Krebsforschungszentrum, Heidelberg

    1976-01-01

    Haemorrhages into the subarachnoid space often lead to instant blocking of the liquor circulation pathways with an acute increase of the intracranial pressure and acute ventricular enlargement. These liquor circulation disturbances may be diagnosed by liquor scintiscanning as well as by pneumoencephalography. 165 patients were examined by both methods. The following results were obtained: liquor circulation disturbances after subarachnoid bleeding are frequent; they should be expected in 33% of all cases after spontaneous subarachnoid haemorrhages and in 68% of cases after traumatic subarachnoid haemorrhages. The most severe form of liquor circulation disturbance may also be diagnosed by liquor scintiscanning as well as by pneumoencephalography. Liquor scintiscanning is the more exact method in cases with transitory ventricular reflux, while lumbar pneumoencephalography, in this series, is the method of choice when it comes to documenting the extent of the hydrocephalus. (GSE)

  11. Wolbachia Blocks Currently Circulating Zika Virus Isolates in Brazilian Aedes aegypti Mosquitoes

    OpenAIRE

    Dutra, Heverton Leandro Carneiro; Rocha, Marcele Neves; Dias, Fernando Braga Stehling; Mansur, Simone Brutman; Caragata, Eric Pearce; Moreira, Luciano Andrade

    2016-01-01

    Summary The recent association of Zika virus with cases of microcephaly has sparked a global health crisis and highlighted the need for mechanisms to combat the Zika vector, Aedes aegypti mosquitoes. Wolbachia pipientis, a bacterial endosymbiont of insects, has recently garnered attention as a mechanism for arbovirus control. Here we report that Aedes aegypti harboring Wolbachia are highly resistant to infection with two currently circulating Zika virus isolates from the recent Brazilian epide...

  12. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function F(t) = (1+e^(-αβ))/(1+e^((t-α)β)) was chosen to characterize the small intestinal impulse response function. By changing only the two parameters α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the fitted Fermi function. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, i.e. the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
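
Under the stated model, the iterative convolving idea can be sketched directly: convolve a hypothetical input function with the Fermi function and search for the (α, β) that minimize the sum of squared residuals against the observed curve. A coarse grid search stands in here for the authors' optimization, and the input curve and true parameters are synthetic assumptions:

```python
import numpy as np

def fermi(t, alpha, beta):
    """Fermi retention function F(t) = (1+e^(-αβ)) / (1+e^((t-α)β))."""
    return (1 + np.exp(-alpha * beta)) / (1 + np.exp((t - alpha) * beta))

dt = 0.1
t = np.arange(0, 12, dt)                       # time in hours

# Hypothetical small-intestinal input rate derived from gastric emptying.
inp = np.exp(-t / 1.5) / 1.5                   # monoexponential emptying rate

# Synthetic "observed" small-intestinal curve generated with known α, β.
obs = np.convolve(inp, fermi(t, 2.0, 3.0))[:len(t)] * dt

# Grid search for the (α, β) whose convolved curve best fits the observation.
best = min(
    ((np.sum((np.convolve(inp, fermi(t, a, b))[:len(t)] * dt - obs) ** 2), a, b)
     for a in np.arange(0.5, 4.01, 0.25)
     for b in np.arange(0.5, 6.01, 0.25)),
    key=lambda r: r[0])
print(best[1], best[2])                        # → 2.0 3.0
```

Once (α, β) are fitted, the small intestinal mean transit time follows by integrating the fitted impulse response, which is how the median transit times in the abstract are obtained.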

  13. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation light detection and ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions fitted by a nonlinear least squares (NLS) algorithm implemented in R and derived a digital terrain model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and rejecting false echoes when generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
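
The Richardson-Lucy baseline that the study compares against can be sketched in a few lines of NumPy (the Gold algorithm uses a similar multiplicative, nonnegativity-preserving iteration); the Gaussian system response and echo positions below are illustrative assumptions, not NEON parameters:

```python
import numpy as np

def richardson_lucy(d, k, iters=200):
    """Richardson-Lucy deconvolution of waveform d by kernel k (both 1-D)."""
    u = np.full_like(d, d.mean())              # flat, strictly positive start
    k_mirror = k[::-1]
    for _ in range(iters):
        conv = np.convolve(u, k, mode='same')
        conv[conv == 0] = 1e-12                # guard against division by zero
        u = u * np.convolve(d / conv, k_mirror, mode='same')
    return u

x = np.linspace(-5, 5, 201)
k = np.exp(-x ** 2 / 0.5)
k /= k.sum()                                   # Gaussian system response

truth = np.zeros_like(x)
truth[[70, 120]] = [1.0, 0.6]                  # two returns (canopy, ground)
d = np.convolve(truth, k, mode='same')         # recorded, blurred waveform

u = richardson_lucy(d, k)
print(int(np.argmax(u)))                       # sharpened echo near index 70
```

The multiplicative update keeps the estimate nonnegative at every iteration, which is why both Richardson-Lucy and Gold are favored for waveforms whose true return amplitudes cannot be negative.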

  14. Variational optimization algorithms for uniform matrix product states

    Science.gov (United States)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  15. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorbing particle is a single N-shape wave, and the PA signals of a complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing working directly on the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With our proposed method, the resulting PA images yield more detailed structural information, and micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
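
The deconvolution step can be sketched with a Wiener filter, a common frequency-domain way to deconvolve a raw signal by a pre-measured PSF (the paper does not specify the authors' exact deconvolution scheme, so this is a stand-in); the N-shape PSF below is modeled as a derivative-of-Gaussian, and all parameters are hypothetical:

```python
import numpy as np

def wiener_deconvolve(raw, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution of a raw signal by the
    (pre-measured) point spread function of the system."""
    n = len(raw)
    H = np.fft.rfft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(raw) * G, n)

# Hypothetical N-shape PSF of an ideal point absorber (derivative of Gaussian).
tk = np.linspace(-0.2, 0.2, 41)
psf = -tk * np.exp(-tk ** 2 * 200)
psf /= np.sqrt(np.sum(psf ** 2))               # normalize to unit energy

truth = np.zeros(400)
truth[150] = 1.0                               # a single absorbing micro-structure
raw = np.convolve(truth, psf, mode='same')     # recorded N-wave

rec = np.roll(wiener_deconvolve(raw, psf), len(psf) // 2)  # undo kernel centering
print(int(np.argmax(rec)))                     # → 150
```

The `snr` term regularizes the division at frequencies where the PSF has little energy (notably DC, since an N-wave integrates to zero); the subsequent EMD step in the paper would then enforce positive polarity and spectral consistency on such a deconvolved trace.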

  16. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Harper, Brett [Institute of Biomedical Studies, Baylor University, Waco, TX 76798 (United States); Neumann, Elizabeth K. [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States); Stow, Sarah M.; May, Jody C.; McLean, John A. [Department of Chemistry, Vanderbilt University, Nashville, TN 37235 (United States); Vanderbilt Institute of Chemical Biology, Nashville, TN 37235 (United States); Vanderbilt Institute for Integrative Biosystems Research and Education, Nashville, TN 37235 (United States); Center for Innovative Technology, Nashville, TN 37235 (United States); Solouki, Touradj, E-mail: Touradj_Solouki@baylor.edu [Department of Chemistry and Biochemistry, Baylor University, Waco, TX 76798 (United States)

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas-phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match CCS values measured from the individually analyzed corresponding peptides on uniform field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly-protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  17. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution

    International Nuclear Information System (INIS)

    Harper, Brett; Neumann, Elizabeth K.; Stow, Sarah M.; May, Jody C.; McLean, John A.; Solouki, Touradj

    2016-01-01

Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited IM instrument resolving powers may preclude adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting “pure” IM and collision-induced dissociation (CID) mass spectra of IM-overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810–1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions that are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match CCS values measured for the individually analyzed peptides on uniform-field IM instrumentation. We introduce an approach that uses experimentally determined IM arrival time (AT) “shift factors” to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. We also discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with CCS values calculated theoretically using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for the doubly protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values.

  18. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (¹⁹⁹IHg and ²⁰¹MeHg) are added before sample preparation to quantify the extent of methylation and demethylation processes. Focused microwave digestion was evaluated for the quantitative extraction of these compounds from solid matrices of environmental interest. Satisfactory results were obtained for different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464) both by GC-ICPMS and GC-MS, demonstrating the suitability of the proposed analytical method. (author)
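At its core, isotope pattern deconvolution reduces to a small overdetermined linear system: the measured isotope abundances are modeled as a linear combination of the isotope patterns of the natural and spiked species, and the molar fractions are recovered by least squares. A minimal NumPy sketch; the abundance values below are made up for illustration, not the certified enrichments of the actual spikes:

```python
import numpy as np

# Hypothetical isotope abundance matrix: columns are the isotope patterns of
# natural Hg, a 199Hg-enriched spike, and a 201Hg-enriched spike (fractions).
A = np.array([
    [0.10, 0.02, 0.01],   # abundance of isotope 198 in each pattern
    [0.17, 0.91, 0.02],   # isotope 199
    [0.23, 0.04, 0.05],   # isotope 200
    [0.13, 0.02, 0.90],   # isotope 201
    [0.30, 0.01, 0.02],   # isotope 202
])

# Simulated measured pattern: 60% natural, 25% spike-199, 15% spike-201
x_true = np.array([0.60, 0.25, 0.15])
y = A @ x_true

# Isotope pattern deconvolution = least-squares solve for the molar fractions
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(x_hat, 3))
```

With real data the measured pattern carries noise, so the least-squares residual doubles as a quality check on the assumed patterns.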

  19. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    Science.gov (United States)

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  20. Deconvolution effect of near-fault earthquake ground motions on stochastic dynamic response of tunnel-soil deposit interaction systems

    Directory of Open Access Journals (Sweden)

    K. Hacıefendioğlu

    2012-04-01

The deconvolution effect of near-fault earthquake ground motions on the stochastic dynamic response of tunnel-soil deposit interaction systems is investigated using the finite element method. Two different earthquake input mechanisms are used to consider deconvolution effects in the analyses: the standard rigid-base input model and the deconvolved-base-rock input model. The Bolu tunnel in Turkey is chosen as a numerical example, and the 1999 Kocaeli earthquake ground motion is selected as the near-fault ground motion. Interface finite elements are used between the tunnel and the soil deposit. The means of the maximum values of the quasi-static, dynamic and total responses obtained from the two input models are compared with each other.

  1. The block Gauss-Seidel method in sound transmission problems

    OpenAIRE

    Poblet-Puig, Jordi; Rodríguez Ferran, Antonio

    2009-01-01

Sound transmission through partitions can be modelled as an acoustic fluid-elastic structure interaction problem. The block Gauss-Seidel iterative method is used to solve the finite element linear system of equations. The blocks are defined in a natural way, respecting the fluid and structural domains. The convergence criterion (spectral radius of the iteration matrix smaller than one) is analysed and interpreted in physical terms by means of simple one-dimensional problems.
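The block Gauss-Seidel iteration described above can be sketched in a few lines: the coupled system is partitioned into two blocks, and each sweep solves one block using the most recent values of the other. A minimal numerical sketch; the 2x2 blocks and their values are illustrative, not a discretized vibro-acoustic model:

```python
import numpy as np

def block_gauss_seidel(A11, A12, A21, A22, b1, b2, sweeps=50):
    """Solve [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2] by block Gauss-Seidel.
    Converges when the spectral radius of the iteration matrix is below one."""
    x1, x2 = np.zeros(len(b1)), np.zeros(len(b2))
    for _ in range(sweeps):
        x1 = np.linalg.solve(A11, b1 - A12 @ x2)   # e.g. structure block
        x2 = np.linalg.solve(A22, b2 - A21 @ x1)   # e.g. fluid block, uses latest x1
    return x1, x2

# Illustrative blocks with weak coupling (small off-diagonal blocks)
A11 = np.array([[4.0, 1.0], [1.0, 4.0]])
A22 = np.array([[5.0, 1.0], [1.0, 5.0]])
A12 = 0.5 * np.eye(2)
A21 = 0.3 * np.eye(2)
b1 = np.array([1.0, 2.0])
b2 = np.array([3.0, 4.0])
x1, x2 = block_gauss_seidel(A11, A12, A21, A22, b1, b2)
```

Weak fluid-structure coupling keeps the spectral radius small, so a few sweeps already reach machine precision here.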

  2. How Properties of Kenaf Fibers from Burkina Faso Contribute to the Reinforcement of Earth Blocks

    Science.gov (United States)

    Millogo, Younoussa; Aubert, Jean-Emmanuel; Hamard, Erwan; Morel, Jean-Claude

    2015-01-01

    Physicochemical characteristics of Hibiscus cannabinus (kenaf) fibers from Burkina Faso were studied using X-ray diffraction (XRD), infrared spectroscopy, thermal gravimetric analysis (TGA), chemical analysis and video microscopy. Kenaf fibers (3 cm long) were used to reinforce earth blocks, and the mechanical properties of reinforced blocks, with fiber contents ranging from 0.2 to 0.8 wt%, were investigated. The fibers were mainly composed of cellulose type I (70.4 wt%), hemicelluloses (18.9 wt%) and lignin (3 wt%) and were characterized by high tensile strength (1 ± 0.25 GPa) and Young’s modulus (136 ± 25 GPa), linked to their high cellulose content. The incorporation of short fibers of kenaf reduced the propagation of cracks in the blocks, through the good adherence of fibers to the clay matrix, and therefore improved their mechanical properties. Fiber incorporation was particularly beneficial for the bending strength of earth blocks because it reinforces these blocks after the failure of soil matrix observed for unreinforced blocks. Blocks reinforced with such fibers had a ductile tensile behavior that made them better building materials for masonry structures than unreinforced blocks.

  3. An efficient algorithm for removal of inactive blocks in reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Ertekin, T. (Pennsylvania State Univ., PA (United States))

    1992-02-01

In the efficient simulation of reservoirs having irregular boundaries, one is confronted with two problems: the removal of inactive blocks at the matrix level, and the development and application of a variable band-width solver. A simple algorithm is presented that provides effective solutions to both problems. The algorithm is demonstrated for both the natural and D4 ordering schemes. It can easily be incorporated in existing simulators and results in significant savings in CPU time and matrix storage requirements. The removal of the inactive blocks at the matrix level plays the major role in effecting these savings, whereas the application of a variable band-width solver plays only an enhancing role. The value of this algorithm lies in the fact that it takes advantage of the irregular reservoir boundaries that are invariably encountered in almost all practical applications of reservoir simulation. 11 refs., 3 figs., 3 tabs.
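The heart of any such scheme is a renumbering map that assigns consecutive equation numbers to active blocks only, so inactive blocks never enter the matrix. A hedged sketch, with a made-up mask and helper name for illustration:

```python
import numpy as np

def renumber_active(active):
    """Map grid-block indices to consecutive equation numbers, skipping
    inactive blocks (marked -1); only active blocks get matrix rows."""
    flat = active.ravel().astype(bool)
    mapping = -np.ones(flat.size, dtype=int)
    mapping[np.flatnonzero(flat)] = np.arange(flat.sum())
    return mapping

# Irregular reservoir boundary: 0 = inactive block, 1 = active block
active = np.array([[0, 1, 1],
                   [1, 1, 0]])
mapping = renumber_active(active)
# Only 4 unknowns remain instead of 6, shrinking the matrix accordingly
```

The same map is applied when assembling the coefficient matrix, so storage scales with active blocks rather than the bounding grid.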

  4. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
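The core of such resolution enhancement is division in the Fourier domain: the broad lineshape is divided out and a narrower one is multiplied back in, the narrow kernel doubling as apodization against noise amplification. A minimal sketch on a synthetic two-line spectrum; line positions, widths, and the guard term are invented for illustration:

```python
import numpy as np

n = 512
x = np.arange(n)

def gauss(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Two overlapped lines: their separation (20) is below twice the width (12),
# so the simulated field-swept spectrum shows a single unresolved peak.
spectrum = gauss(x, 250.0, 12.0) + gauss(x, 270.0, 12.0)

# Lineshape swap in the Fourier domain: divide out the broad Gaussian,
# multiply in a narrow one (which apodizes the high frequencies).
wide = gauss(x, n // 2, 12.0)
narrow = gauss(x, n // 2, 4.0)
F_wide = np.fft.fft(np.fft.ifftshift(wide))
F_narrow = np.fft.fft(np.fft.ifftshift(narrow))
eps = 1e-12 * np.abs(F_wide).max()   # guards the division at high frequency
enhanced = np.real(np.fft.ifft(np.fft.fft(spectrum) * F_narrow / (F_wide + eps)))
```

After the swap the two lines have width 4 and are cleanly resolved; with experimental noise the narrow-kernel apodization (or a stronger regularizer) becomes essential.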

  5. Optimizing Sparse Matrix-Multiple Vectors Multiplication for Nuclear Configuration Interaction Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-14

Obtaining highly accurate predictions on the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4x speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
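What distinguishes SpMM from plain SpMV is that each stored matrix entry is applied to k vector entries at once, amortizing the irregular memory accesses over the whole block of vectors. A minimal CSR-based sketch of the operation itself; the paper's optimized kernels use the CSB format and cache blocking, which this plain-Python version does not attempt:

```python
import numpy as np

def spmm_csr(values, col_idx, row_ptr, X):
    """Multiply a CSR sparse matrix by a dense block of vectors X (n x k).
    Handling all k vectors inside the row loop reuses each matrix entry
    k times, which is the data-reuse effect SpMM optimizations target."""
    m = len(row_ptr) - 1
    Y = np.zeros((m, X.shape[1]))
    for i in range(m):
        for jj in range(row_ptr[i], row_ptr[i + 1]):
            Y[i] += values[jj] * X[col_idx[jj]]   # one entry times k vector entries
    return Y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form
values  = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
X = np.eye(3)          # multiplying by the identity recovers the dense matrix
Y = spmm_csr(values, col_idx, row_ptr, X)
```

The transpose product (SpMM-T) traverses the same arrays but scatters into Y by column index, which is why the paper treats the two kernels together.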

  6. A fast direct method for block triangular Toeplitz-like with tri-diagonal block systems from time-fractional partial differential equations

    Science.gov (United States)

    Ke, Rihuan; Ng, Michael K.; Sun, Hai-Wei

    2015-12-01

In this paper, we study the block lower triangular Toeplitz-like system with tri-diagonal blocks which arises from time-fractional partial differential equations. Existing fast numerical solvers (e.g., the fast approximate inversion method) cannot handle such linear systems, as the main diagonal blocks differ. The main contribution of this paper is to propose a fast direct method for solving this linear system, and to illustrate that the proposed method is much faster than the classical block forward substitution method. Our idea is based on a divide-and-conquer strategy together with fast Fourier transforms for calculating Toeplitz matrix-vector multiplications. The complexity is O(MN log² M) arithmetic operations, where M is the number of blocks (the number of time steps) in the system and N is the size (number of spatial grid points) of each block. Numerical examples from the finite difference discretization of time-fractional partial differential equations are given to demonstrate the efficiency of the proposed method.
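The FFT-based Toeplitz matrix-vector product that the divide-and-conquer strategy relies on embeds the Toeplitz matrix in a circulant matrix of twice the size, where multiplication is diagonalized by the FFT. A small self-contained sketch:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r
    (c[0] == r[0]) by x in O(n log n) via circulant embedding and the FFT."""
    n = len(x)
    # First column of the 2n x 2n circulant embedding; the extra entry is free
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    xx = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(xx))
    return np.real(y[:n])

# 3x3 check against the dense Toeplitz matrix
c = np.array([1.0, 2.0, 3.0])   # first column
r = np.array([1.0, 4.0, 5.0])   # first row
T = np.array([[1.0, 4.0, 5.0],
              [2.0, 1.0, 4.0],
              [3.0, 2.0, 1.0]])
x = np.array([1.0, -1.0, 2.0])
y = toeplitz_matvec(c, r, x)
```

The first n entries of the circulant product equal the Toeplitz product exactly; the padding zeros in xx absorb the wrap-around terms.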

  7. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    Science.gov (United States)

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.

  8. Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

A simple algorithm, which does not require the introduction of wave characteristics, is proposed for determining the matrix of generalized parameters of asymmetrical multiphase transmission lines. The determination is based on the matrix of primary per-unit-length line parameters and a simple iterative procedure. The number of iterations is set by the prescribed error in satisfying the matrix relations between the individual blocks of the determined matrix; this error is closely related to the overall error of the determined matrix.

  9. Irregular conformal block, spectral curve and flow equations

    International Nuclear Information System (INIS)

    Choi, Sang Kwan; Rim, Chaiho; Zhang, Hong

    2016-01-01

Irregular conformal blocks are motivated by the Argyres-Douglas type of N=2 superconformal gauge theory. We investigate the classical/NS limit of the irregular conformal block using the spectral curve on a Riemann surface with irregular punctures, which is equivalent to the loop equation of the irregular matrix model. The spectral curve is reduced to second order (Virasoro symmetry, SU(2) for the gauge theory) and third order (W_3 symmetry, SU(3)) differential equations of a polynomial with finite degree. The conformal and W symmetries generate the flow equations in the spectral curve and determine the irregular conformal block, and hence the partition function of the Argyres-Douglas theory a la the AGT conjecture.

  10. Function of Matrix IGF-1 in Coupling Bone Resorption and Formation

    Science.gov (United States)

    Crane, Janet L.; Cao, Xu

    2013-01-01

Balancing bone resorption and formation is the quintessential component for the prevention of osteoporosis. Signals that determine the recruitment, replication, differentiation, function, and apoptosis of osteoblasts and osteoclasts direct bone remodeling and determine whether bone tissue is gained, lost, or balanced. Therefore, understanding the signaling pathways involved in the coupling process will help develop further targets for osteoporosis therapy, by blocking bone resorption or enhancing bone formation in a space- and time-dependent manner. Insulin-like growth factor type 1 (IGF-1) has long been known to play a role in bone strength. It is one of the most abundant substances in the bone matrix, circulates systemically and is secreted locally, and has a direct relationship with bone mineral density. Recent data have helped further our understanding of the direct role of IGF-1 signaling in coupling bone remodeling, which will be discussed in this review. The bone marrow microenvironment plays a critical role in the fate of MSCs and HSCs, and thus how IGF-1 interacts with other factors in the microenvironment is equally important. While previous clinical trials with IGF-1 administration have been unsuccessful at enhancing bone formation, advances in basic science studies have provided insight into further mechanisms that should be considered for future trials. Additional basic science studies dissecting the regulation and function of matrix IGF-1 in modeling and remodeling will continue to provide further insight for future directions for anabolic therapies for osteoporosis. PMID:24068256

  11. Function of matrix IGF-1 in coupling bone resorption and formation.

    Science.gov (United States)

    Crane, Janet L; Cao, Xu

    2014-02-01

Balancing bone resorption and formation is the quintessential component for the prevention of osteoporosis. Signals that determine the recruitment, replication, differentiation, function, and apoptosis of osteoblasts and osteoclasts direct bone remodeling and determine whether bone tissue is gained, lost, or balanced. Therefore, understanding the signaling pathways involved in the coupling process will help develop further targets for osteoporosis therapy, by blocking bone resorption or enhancing bone formation in a space- and time-dependent manner. Insulin-like growth factor type 1 (IGF-1) has long been known to play a role in bone strength. It is one of the most abundant substances in the bone matrix, circulates systemically and is secreted locally, and has a direct relationship with bone mineral density. Recent data have helped further our understanding of the direct role of IGF-1 signaling in coupling bone remodeling, which will be discussed in this review. The bone marrow microenvironment plays a critical role in the fate of mesenchymal stem cells and hematopoietic stem cells, and thus how IGF-1 interacts with other factors in the microenvironment is equally important. While previous clinical trials with IGF-1 administration have been unsuccessful at enhancing bone formation, advances in basic science studies have provided insight into further mechanisms that should be considered for future trials. Additional basic science studies dissecting the regulation and function of matrix IGF-1 in modeling and remodeling will continue to provide further insight for future directions for anabolic therapies for osteoporosis.

  12. Calcul statistique du volume des blocs matriciels d'un gisement fissuré The Statistical Computing of Matrix Block Volume in a Fissured Reservoir

    Directory of Open Access Journals (Sweden)

    Guez F.

    2006-11-01

The search for optimum production conditions for a fissured reservoir depends on having a good description of the fissure pattern; hence the sizes and volumes of the matrix blocks must be defined at all points in a structure. However, the geometry of the medium (juxtaposition and shapes of the blocks) is usually too complex for such computation. This is why, in a previous paper, we got around this difficulty by reasoning on the basis of averages (dips, azimuths, fissure spacing), which led us to an order of magnitude of the volumes. Yet a mean volume cannot account for the distribution law of matrix block volumes, and it is this distribution that conditions the choice of one or several successive recovery methods. We therefore present here an original method for the statistical computation of the distribution law of matrix block volumes, applicable at any point of a reservoir. The fraction of the reservoir made up of blocks of a given volume is deduced from it. General knowledge of the fracturing phenomenon serves as the basis of the model, and subsurface observations of the reservoir's fracturing supply its data (histograms of fissure orientation and spacing). An application to the Eschau field (Alsace, France) is reported here to illustrate the method.

  13. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

In this paper, we present a method for the measurement and analysis of nuclear pulse signals with which pile-up signals are removed, the signal baseline is restored, and the original signal is obtained. The data acquisition system consists of an FPGA, an ADC and a USB interface. The FPGA controls the high-speed ADC to sample the nuclear radiation signal, and the USB controller works in Slave FIFO mode to implement high-speed transmission. Online processing of the data with the blind deconvolution algorithm and data display are accomplished using LabVIEW. Simulation and experimental results demonstrate the advantages of the method. (authors)

  14. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.
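The approximate deconvolution operator at the heart of ADMs is the truncated van Cittert series D_N = sum_{k=0}^{N} (I - G)^k applied to the filtered field, so that u is approximated by D_N(u_bar). A minimal 1-D sketch with a simple three-point discrete filter; the filter and test field are illustrative choices, not taken from the book:

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x) + 0.2 * np.sin(7 * x)   # "true" field

def G(v):
    """Discrete filter: three-point weighted average (periodic)."""
    return 0.25 * np.roll(v, 1) + 0.5 * v + 0.25 * np.roll(v, -1)

ubar = G(u)   # filtered field

def approx_deconv(vbar, N):
    """van Cittert approximate deconvolution D_N = sum_{k=0}^{N} (I - G)^k."""
    v = np.zeros_like(vbar)
    term = vbar.copy()
    for _ in range(N + 1):
        v += term
        term = term - G(term)    # next power of (I - G) applied to vbar
    return v

u5 = approx_deconv(ubar, 5)      # much closer to u than ubar is
```

For well-resolved scales the filter symbol is close to one, so each extra term shrinks the deconvolution error geometrically; unresolved scales are, by design, not recovered.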

  15. Factors affecting defective fraction of biso-coated HTGR fuel particles during in-block carbonization

    International Nuclear Information System (INIS)

    Caputo, A.J.; Johnson, D.R.; Bayne, C.K.

    1977-01-01

    The performance of Biso-coated thoria fuel particles during the in-block processing step of HTGR fuel element refabrication was evaluated. The effect of various process variables (heating rate, particle crushing strength, horizontal and/or vertical position in the fuel element blocks, and fuel hole permeability) on pitch coke yield, defective fraction of fuel particles, matrix structure, and matrix porosity was evaluated. Of the variables tested, only heating rate had a significant effect on pitch coke yield while both heating rate and particle crushing strength had a significant effect on defective fraction of fuel particles

  16. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images
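Under the linear, spatially invariant assumption above, the detected image is the primary image convolved with a delta-plus-scatter kernel, so scatter can be removed by division in the frequency domain. A hedged sketch with a plain Gaussian standing in for the paper's modified-Gaussian PSF, and all parameter values invented:

```python
import numpy as np

def remove_scatter(detected, scatter_frac, sigma):
    """Invert detected = (1-f)*primary + f*(primary convolved with G_sigma)
    by dividing by the system frequency response. The response never drops
    below 1-f, so this inverse filter is well conditioned."""
    ny, nx = detected.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H_blur = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    H = (1.0 - scatter_frac) + scatter_frac * H_blur
    return np.real(np.fft.ifft2(np.fft.fft2(detected) / H))

# Forward-simulate a scatter-degraded radiograph of a square object
primary = np.zeros((64, 64))
primary[24:40, 24:40] = 1.0
f, sigma = 0.4, 6.0
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
H = (1.0 - f) + f * np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
detected = np.real(np.fft.ifft2(np.fft.fft2(primary) * H))

restored = remove_scatter(detected, f, sigma)
```

Because the scatter fraction f is below one, the filter's minimum gain is 1-f and noise amplification stays bounded, which is what makes this deconvolution clinically practical.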

  17. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

Interpretation of high-resolution aeromagnetic data of Ilesha and its environs, within the basement complex geological setting of Southwestern Nigeria, was carried out in this study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. The investigation used Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced, and the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth-weighting analyses across the study area using the Euler deconvolution control file in the Oasis montaj software. The total magnetic intensity in the field ranged from –77.7 to 139.7 nT, revealing both high-magnitude (high-amplitude) and low-magnitude (low-amplitude) magnetic anomalies in the area. The study area is characterised by intensity highs that correlate with lithological variation in the basement; this sharp contrast reflects the difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak-intensity, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with depths ranging from –500 to 1000 m, and shows that the basement relief is generally gentle and flat, lying within the basement terrain.
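Euler deconvolution fits the homogeneity equation (x - x0)dT/dx + (z - z0)dT/dz = -N(T - b) to the field and its gradients; rearranged, it is linear in the source coordinates (x0, z0) and the background level b. A synthetic 2-D profile sketch with a point source, analytic gradients, and invented values throughout:

```python
import numpy as np

# Synthetic anomaly: point source with structural index N = 3 (dipole-like
# decay), observed along a surface profile. All numbers are illustrative.
N = 3.0
x0_true, depth_true = 40.0, 8.0
x = np.linspace(0.0, 100.0, 201)
dx = x - x0_true
dz = 0.0 - (-depth_true)          # observation height 0, source at z = -depth
r2 = dx**2 + dz**2

T  = r2 ** (-N / 2)               # homogeneous field of degree -N
Tx = -N * dx * r2 ** (-N / 2 - 1) # analytic horizontal gradient
Tz = -N * dz * r2 ** (-N / 2 - 1) # analytic vertical gradient

# Euler's equation rearranged into a linear system for (x0, z0, b):
#   x0*Tx + z0*Tz + N*b = x*Tx + z*Tz + N*T   (the z*Tz term vanishes at z = 0)
A = np.column_stack([Tx, Tz, N * np.ones_like(x)])
rhs = x * Tx + N * T
(x0_hat, z0_hat, b_hat), *_ = np.linalg.lstsq(A, rhs, rcond=None)
depth_hat = -z0_hat
```

In practice the solve is repeated in sliding windows with measured (often noisy) gradients, which is why real solutions like the ones above are quoted as depth ranges rather than single points.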

  18. New Designs in Circulation Areas And Museums the Case of the Quai Branly Museum

    Directory of Open Access Journals (Sweden)

    Nihan CANBAKAL ATAOĞLU

    2016-05-01

During the Pre-Modern Era of the 1970s, new buildings questioning general typologies and offering advances in terms of design and function began to be built. Architects not only searched for untried block structures but extended this quest to interior spaces as well, designing implicit internal set-ups with orthographic tools such as plans and sections. In today’s museums, new and multiple circulation routes are designed, in which visitors do not read the building from beginning to end like a book but choose their own paths and walk through the exhibition as if in a labyrinth. These radical perceptual and spatial changes and spatial scenarios are particularly emphasized in museum buildings. The new spatial arrangements in circulation areas offer new spatial experiences through irregular gaps in sections, regular but non-geometric floor plans, vagueness of borders, striking colors, patterns and materials, and differentiated circulation elements (stairs, escalators, elevators, platforms, bridges). In this study, Jean Nouvel’s Quai Branly Museum (2006), a recent example of this striking change, is analyzed through spatial experiences, observations, the syntactic analysis technique and semantic examination.

  19. Model-based deconvolution of cell cycle time-series data reveals gene expression details at high resolution.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2009-08-01

In both prokaryotic and eukaryotic cells, gene expression is regulated across the cell cycle to ensure "just-in-time" assembly of select cellular structures and molecular machines. However, present in all time-series gene expression measurements is variability that arises from both systematic error in the cell synchrony process and variance in the timing of cell division at the level of the single cell. Thus, gene or protein expression data collected from a population of synchronized cells is an inaccurate measure of what occurs in the average single cell across a cell cycle. Here, we present a general computational method to extract "single-cell"-like information from population-level time-series expression data. This method removes the effects of (1) variance in growth rate and (2) variance in the physiological and developmental state of the cell. Moreover, this method represents an advance in the deconvolution of molecular expression data in its flexibility, minimal assumptions, and use of a cross-validation analysis to determine the appropriate level of regularization. Applying our deconvolution algorithm to cell cycle gene expression data from the dimorphic bacterium Caulobacter crescentus, we recovered critical features of cell cycle regulation in essential genes, including ctrA and ftsZ, that were obscured in population-based measurements. In doing so, we highlight the problem with using population data alone to decipher cellular regulatory mechanisms and demonstrate how our deconvolution algorithm can be applied to produce a more realistic picture of temporal regulation in a cell.
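A stripped-down version of such a deconvolution is ridge-regularized least squares against a known synchrony-loss kernel; the regularization weight is what the paper selects by cross-validation, whereas here it is simply fixed, and every shape, width, and value below is invented for illustration:

```python
import numpy as np

def circulant(kernel, n):
    """Dense matrix applying circular convolution with a centered kernel."""
    k = np.zeros(n)
    m = len(kernel)
    for t in range(m):
        k[(t - m // 2) % n] = kernel[t]
    return np.array([np.roll(k, i) for i in range(n)])

def tikhonov_deconvolve(y, K, lam):
    """argmin_x ||K x - y||^2 + lam ||x||^2 (regularized deconvolution)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# "Single-cell" profile: two sharp expression pulses across the cycle
n = 100
t = np.arange(n)
x_true = np.exp(-0.5 * ((t - 30) / 3.0) ** 2) \
       + 0.7 * np.exp(-0.5 * ((t - 60) / 3.0) ** 2)

# Population averaging smears the pulses (Gaussian loss-of-synchrony kernel)
g = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)
g /= g.sum()
K = circulant(g, n)
y = K @ x_true                        # observed population-level profile

x_hat = tikhonov_deconvolve(y, K, 1e-4)
```

Sweeping the regularization weight over held-out data, as the paper does, replaces the fixed 1e-4 here; too small a weight amplifies noise, too large re-smears the pulses.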

  20. A new Eulerian-Lagrangian finite element simulator for solute transport in discrete fracture-matrix systems

    Energy Technology Data Exchange (ETDEWEB)

    Birkholzer, J.; Karasaki, K. [Lawrence Berkeley National Lab., CA (United States). Earth Sciences Div.

    1996-07-01

Fracture network simulators have been used extensively in the past to obtain a better understanding of flow and transport processes in fractured rock. However, most of these models do not account for fluid or solute exchange between the fractures and the porous matrix, although diffusion into the matrix pores can have a major impact on the spreading of contaminants. In the present paper a new finite element code, TRIPOLY, is introduced which combines a powerful fracture network simulator with an efficient method to account for the diffusive interaction between the fractures and the adjacent matrix blocks. The fracture network simulator used in TRIPOLY features a mixed Lagrangian-Eulerian solution scheme for transport in the fractures, combined with an adaptive gridding technique to account for sharp concentration fronts. The fracture-matrix interaction is calculated with an efficient method that has been used successfully in the past for dual-porosity models. Discrete fractures and matrix blocks are treated as two different systems, and the interaction is modeled by introducing sink/source terms in both systems. It is assumed that diffusive transport in the matrix can be approximated as a one-dimensional process, perpendicular to the adjacent fracture surfaces. A direct solution scheme is employed to solve the coupled fracture and matrix equations. The newly developed combination of the fracture network simulator and the fracture-matrix interaction module allows detailed studies of spreading processes in fractured porous rock. The authors present a sample application that demonstrates the code's ability to handle large-scale fracture-matrix systems comprising individual fractures and matrix blocks of arbitrary size and shape.

  1. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

deblurring in the presence of impulsive noise,” Int. J. Comput. Vision, vol. 70, no. 3, pp. 279–298, Dec. 2006. [13] A. E. Beaton and J. W. Tukey, “The...AN L1-TV ALGORITHM FOR DECONVOLUTION WITH SALT AND PEPPER NOISE Brendt Wohlberg∗ T-7 Mathematical Modeling and Analysis Los Alamos National Laboratory...and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider

  2. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  3. Microtome Sliced Block Copolymers and Nanoporous Polymers as Masks for Nanolithography

    DEFF Research Database (Denmark)

    Shvets, Violetta; Schulte, Lars; Ndoni, Sokol

    2014-01-01

Introduction. The self-assembling properties of block copolymers are commonly used for the creation of very fine nanostructures [1]. The goal of our project is to test new methods of block-copolymer lithography mask preparation: macroscopic pieces of block copolymers or nanoporous polymers with cross...... PDMS can be chemically etched from the PB matrix by tetrabutylammonium fluoride in tetrahydrofuran, and a macroscopic nanoporous PB piece is obtained. Both the block-copolymer piece and the nanoporous polymer piece were sliced with a cryomicrotome perpendicular to the axis of cylinder alignment and flakes...... of etching patterns appear only under certain parts of thick flakes and are not continuous. Although flakes from the block copolymer are thinner and more uniform in thickness than flakes from the nanoporous polymer, the quality of patterns under nanoporous flakes appeared to be better than under block copolymer...

  4. Aqueous flow and transport in analog systems of fractures embedded in permeable matrix

    DEFF Research Database (Denmark)

    Sonnenborg, Torben Obel; Butts, Michael Brian; Jensen, Karsten Høgh

    1999-01-01

    Two-dimensional laboratory investigations of flow and transport in a fractured permeable medium are presented. Matrix blocks of a manufactured consolidated permeable medium were arranged together to create fractures in the spaces between the blocks. Experiments examined flow and transport in four...

  5. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  6. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

We solve numerically a deconvolution problem to extract the undisturbed spectrum from the measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of the neutral pions by detecting their decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation. (C) 2010 Elsevier B.V. All rights reserved.
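The regularized inversion of a discretized, ill-posed smearing equation can be illustrated on a toy problem. The Gaussian resolution kernel, the noise level, and the fixed regularization strength below are illustrative assumptions, not the ALICE analysis (which selects the strength via the L-curve):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned discretized smearing kernel: a Gaussian resolution matrix A.
n = 80
k = np.arange(n)
A = np.exp(-0.5 * ((k[:, None] - k[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

# "True" steeply falling spectrum and its smeared, noisy measurement.
x_true = np.exp(-k / 15.0)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, y, lam):
    """Minimize ||A x - y||^2 + lam ||x||^2 via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

x_naive = np.linalg.solve(A, y)   # unregularized: noise is wildly amplified
x_reg = tikhonov(A, y, lam=1e-3)  # regularized: stable, close to x_true
```

Even a tiny amount of measurement noise destroys the naive inversion because the smearing matrix has nearly vanishing singular values; the lam term damps exactly those directions.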

  7. Analysis of low-pass filters for approximate deconvolution closure modelling in one-dimensional decaying Burgers turbulence

    Science.gov (United States)

    San, O.

    2016-01-01

The idea of spatial filtering is central in approximate deconvolution large-eddy simulation (AD-LES) of turbulent flows. The need for low-pass filters naturally arises in the approximate deconvolution approach, which is based solely on mathematical approximations employing repeated filtering operators. Two families of low-pass spatial filters are studied in this paper: the Butterworth filters and the Padé filters. With a selection of various filtering parameters, variants of the AD-LES are systematically applied to the decaying Burgers turbulence problem, which is a standard prototype for more complex turbulent flows. Compared with direct numerical simulations, all forms of the AD-LES approach predict significantly better results than the under-resolved simulations at the same grid resolution. However, the results depend strongly on the selection of the filtering procedure and the filter design. It is concluded that a complete attenuation of the smallest scales is crucial to prevent energy accumulation at the grid cut-off.
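The repeated-filtering idea behind approximate deconvolution can be sketched as a truncated van Cittert series, Q_N = sum over k of (I - G)^k applied to the filtered field. The discrete three-point filter and the number of series terms below are illustrative choices, not the paper's Butterworth or Padé filters:

```python
import numpy as np

def box_filter(u):
    """Simple periodic three-point low-pass filter with (1/4, 1/2, 1/4) weights."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approx_deconv(u_bar, n_terms):
    """Truncated van Cittert series: apply Q_N = sum_{k=0}^{N} (I - G)^k to u_bar."""
    term = u_bar.copy()
    total = u_bar.copy()
    for _ in range(n_terms):
        term = term - box_filter(term)  # one more application of (I - G)
        total += term
    return total

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4.0 * x)  # resolved-scale periodic test signal
u_bar = box_filter(u)
u0 = approx_deconv(u_bar, 0)           # N = 0: no deconvolution, just the filtered field
u5 = approx_deconv(u_bar, 5)           # N = 5: five extra series terms
```

For each Fourier mode the residual error after N terms scales as (1 - G)^(N+1), so a few terms recover the resolved scales almost exactly while the grid cut-off modes, where G vanishes, are left attenuated.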

  8. Impact of atmospheric blocking events on the decrease of precipitation in the Selenga River basin

    Science.gov (United States)

    Antokhina, O.; Antokhin, P.; Devyatova, E.; Vladimir, M.

    2017-12-01

The periods of prolonged deficiency of the hydropower potential (HP) of the Angara cascade of hydroelectric plants, related to low inflow in the Baikal and Angara basins, threaten the energy sector of Siberia. Five such periods have been recorded since 1901. The last period began in 1996 and continues today. This period attracts special attention because it is the longest and coincides with the observed climate change. In our previous work we found that the reason for the observed decrease of HP is the low water content of the Selenga River (the main river of the Baikal basin). We also found that the variations of the Selenga water content depend almost entirely on summer atmospheric precipitation. The most dramatic decrease of summer precipitation is observed in July. In turn, precipitation in July depends on the location and intensity of the atmospheric frontal zone which separates the mid-latitude circulation from the East Asian monsoon system. Recently this frontal zone has weakened and the intensity of the East Asian summer monsoon has decreased, and the reasons leading to these changes need to be understood. In the presented work we investigate the influence of atmospheric blocking over Asia on the East Asian summer monsoon circulation in the period of its maximum (July). Based on the analysis of a large number of blocking events, we identified the main mechanisms of blocking influence on the monsoon and studied the properties of cyclones formed by the interaction of air masses from the mid-latitudes and the tropics. It turned out that atmospheric blockings play a fundamental role in the formation of the East Asian monsoon moisture transport and in the redistribution of precipitation anomalies. In the absence of blockings over Asia, East Asian monsoon moisture does not extend to the north, whereas in the presence of blockings their spatial configuration and localization completely determine the configuration of precipitation anomalies in the northern part of East Asia. We also found that the weakening monsoon circulation in East Asia is associated with

  9. Circulating levels of matrix metalloproteinases and tissue inhibitors of metalloproteinases in patients with incisional hernia

    DEFF Research Database (Denmark)

    Henriksen, Nadia A; Sørensen, Lars T; Jorgensen, Lars N

    2013-01-01

    Incisional hernia formation is a common complication to laparotomy and possibly associated with alterations in connective tissue metabolism. Matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinases (TIMPs) are closely involved in the metabolism of the extracellular matrix. Our...

  10. Dynamic photoinduced realignment processes in photoresponsive block copolymer films: effects of the chain length and block copolymer architecture.

    Science.gov (United States)

    Sano, Masami; Shan, Feng; Hara, Mitsuo; Nagano, Shusaku; Shinohara, Yuya; Amemiya, Yoshiyuki; Seki, Takahiro

    2015-08-07

    A series of block copolymers composed of an amorphous poly(butyl methacrylate) (PBMA) block connected with an azobenzene (Az)-containing liquid crystalline (PAz) block were synthesized by changing the chain length and polymer architecture. With these block copolymer films, the dynamic realignment process of microphase separated (MPS) cylinder arrays of PBMA in the PAz matrix induced by irradiation with linearly polarized light was studied by UV-visible absorption spectroscopy, and time-resolved grazing incidence small angle X-ray scattering (GI-SAXS) measurements using a synchrotron beam. Unexpectedly, the change in the chain length hardly affected the realignment rate. In contrast, the architecture of the AB-type diblock or the ABA-type triblock essentially altered the realignment feature. The strongly cooperative motion with an induction period before realignment was characteristic only for the diblock copolymer series, and the LPL-induced alignment change immediately started for triblock copolymers and the PAz homopolymer. Additionally, a marked acceleration in the photoinduced dynamic motions was unveiled in comparison with a thermal randomization process.

  11. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.

  12. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  13. A New Block Processing Algorithm of LLL for Fast High-dimension Ambiguity Resolution

    Directory of Open Access Journals (Sweden)

    LIU Wanke

    2016-02-01

Full Text Available Due to the high dimension and precision of the ambiguity vector under multi-frequency, multi-system GNSS observations, a major problem limiting the computational efficiency of ambiguity resolution is the long reduction time of the conventional LLL algorithm. To address this problem, a new block processing algorithm of LLL is proposed, based on an analysis of the relationship between the reduction time and the dimensions and precision of the ambiguity. The new algorithm shortens the reduction time, and thereby improves the computational efficiency of ambiguity resolution, by block-processing the ambiguity variance-covariance matrix, which decreases the dimensions of each single reduction matrix. The new algorithm is validated with two groups of measured data. The results show that the computational efficiency of the new algorithm increased by 65.2% and 60.2%, respectively, compared with that of the LLL algorithm when a reasonable number of blocks is chosen.
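The reduction step that the block scheme above accelerates is the LLL algorithm itself. A compact textbook implementation (δ = 0.75, with the Gram-Schmidt data recomputed after every change for clarity rather than efficiency) is sketched below; it is a generic reference version, not the authors' block variant:

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL lattice basis reduction: size reduction + Lovász condition."""
    b = np.array(basis, dtype=float)
    n = b.shape[0]

    def gram_schmidt(b):
        b_star = b.copy()
        mu = np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i, j] = (b[i] @ b_star[j]) / (b_star[j] @ b_star[j])
                b_star[i] -= mu[i, j] * b_star[j]
        return b_star, mu

    k = 1
    while k < n:
        b_star, mu = gram_schmidt(b)
        for j in range(k - 1, -1, -1):      # size-reduce b_k against earlier vectors
            q = round(mu[k, j])
            if q != 0:
                b[k] -= q * b[j]
                b_star, mu = gram_schmidt(b)
        lovasz = (delta - mu[k, k - 1] ** 2) * (b_star[k - 1] @ b_star[k - 1])
        if b_star[k] @ b_star[k] >= lovasz:  # Lovász condition holds: advance
            k += 1
        else:                                # otherwise swap and step back
            b[[k - 1, k]] = b[[k, k - 1]]
            k = max(k - 1, 1)
    return b

# Classic worked example: the reduced basis spans the same lattice with
# shorter, nearly orthogonal vectors.
reduced = lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

For GNSS ambiguity resolution the basis comes from a decorrelating factorization of the ambiguity variance-covariance matrix; the paper's contribution is to run the reduction block-by-block so each call handles a lower-dimensional matrix.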

  14. Novel serological neo-epitope markers of extracellular matrix proteins for the detection of portal hypertension

    DEFF Research Database (Denmark)

    Leeming, Diana Julie; Karsdal, M A; Byrjalsen, I

    2013-01-01

    The hepatic venous pressure gradient (HVPG) is an invasive, but important diagnostic and prognostic marker in cirrhosis with portal hypertension (PHT). During cirrhosis, remodelling of fibrotic tissue by matrix metalloproteinases (MMPs) is a permanent process generating small fragments of degrade...... extracellular matrix (ECM) proteins known as neoepitopes, which are then released into the circulation....

  15. The Twist Tensor Nuclear Norm for Video Completion.

    Science.gov (United States)

    Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui

    2017-12-01

In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation that laterally stores 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
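The stated equivalence between the nuclear norm of the block-circulant matricization and a Fourier-domain computation is easy to verify numerically, since the DFT block-diagonalizes a block circulant matrix. The tensor below is a random example and the function names are ours, not the paper's:

```python
import numpy as np

def bcirc(X):
    """Block-circulant matricization: frontal slices X[:, :, k] as circulant blocks."""
    n1, n2, n3 = X.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i * n1:(i + 1) * n1, j * n2:(j + 1) * n2] = X[:, :, (i - j) % n3]
    return M

def tnn_direct(X):
    """t-TNN in the original domain: nuclear norm of the block-circulant matrix."""
    return np.linalg.norm(bcirc(X), "nuc")

def tnn_fft(X):
    """t-TNN in the Fourier domain: sum of nuclear norms of the FFT frontal slices."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], "nuc") for k in range(X.shape[2]))

X = np.random.default_rng(0).standard_normal((4, 3, 5))
# tnn_direct(X) and tnn_fft(X) agree to machine precision.
```

The FFT route costs n3 small SVDs instead of one SVD of an (n1·n3)×(n2·n3) matrix, which is what makes the t-TNN computationally practical.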

  16. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    Science.gov (United States)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, the fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.

  17. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  18. Self-Biased Radiation Hardened Ka-Band Circulators for Size, Weight and Power Restricted Long Range Space Applications, Phase II

    Data.gov (United States)

National Aeronautics and Space Administration — Ferrite control components, including circulators and isolators, are fundamental building blocks of Transmit/Receive modules (TRM) utilized in high data rate active...

  19. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

Thanks to a reasonable cost and a simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm from the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model consisting of a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to process low resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure, baseline removal followed by peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment has been conducted on real spectra. In this study a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, has been observed and validated by an extended statistical analysis.
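A generic sketch of the additive model y = baseline + K·x can be fitted by alternating a closed-form smooth-baseline solve with proximal-gradient (ISTA) steps on the sparse peak list. This is not the paper's algorithm; the spectrum, peak shape, penalty weights, and iteration counts are all illustrative assumptions:

```python
import numpy as np

# Synthetic linear MALDI-like spectrum: smooth baseline plus two peaks with a
# known Gaussian peak shape. Sizes, shapes and penalties are illustrative.
n = 200
t = np.arange(n)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)  # peak-shape matrix
x_true = np.zeros(n)
x_true[50], x_true[120] = 5.0, 3.0
baseline_true = 2.0 + np.sin(t / 40.0)
y = baseline_true + K @ x_true

# Joint fit of y ~ b + K x: smooth b via a second-difference penalty, sparse x
# via an l1 penalty, solved by alternating minimization.
D2 = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator
mu, lam = 1e4, 0.5                       # smoothness and sparsity weights
L = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of the data term
x, b = np.zeros(n), np.zeros(n)
for _ in range(100):
    # Baseline update: closed-form penalized least squares given the peaks.
    b = np.linalg.solve(np.eye(n) + mu * D2.T @ D2, y - K @ x)
    # Peak update: ISTA steps with soft thresholding given the baseline.
    for _ in range(50):
        z = x - K.T @ (K @ x + b - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

The smoothness penalty keeps the narrow peaks out of the baseline, so the joint fit avoids the bias a sequential baseline-then-peaks procedure introduces at peak locations.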

  20. A HOS-based blind deconvolution algorithm for the improvement of time resolution of mixed phase low SNR seismic data

    International Nuclear Information System (INIS)

    Hani, Ahmad Fadzil M; Younis, M Shahzad; Halim, M Firdaus M

    2009-01-01

A blind deconvolution technique using a modified higher order statistics (HOS)-based eigenvector algorithm (EVA) is presented in this paper. The main purpose of the technique is to enable the processing of low SNR, short-length seismograms. In our study, the seismogram is assumed to be the output of a mixed phase source wavelet (system) driven by a non-Gaussian input signal (due to the earth) with additive Gaussian noise. Techniques based on second-order statistics are shown to fail when processing non-minimum phase seismic signals because they rely only on the autocorrelation function of the observed signal. In contrast, existing HOS-based blind deconvolution techniques are suitable for processing a non-minimum (mixed) phase system; however, most of them fail to converge and show poor performance whenever noise dominates the actual signal, especially when the observed data are limited (few samples). The developed technique is primarily based on the EVA for blind equalization, originally intended for mixed phase non-Gaussian seismic signals. In order to deal with the dominant noise issue and the small number of available samples, certain modifications are incorporated into the EVA. For determining the deconvolution filter, one of the modifications is to use more than one higher order cumulant slice in the EVA. This overcomes the possibility of non-convergence due to a low signal-to-noise ratio (SNR) of the observed signal. The other modification conditions the cumulant slice by raising the power of the eigenvalues of the cumulant slice related to the actual signal, and rejects the eigenvalues below a threshold representing the noise. This modification reduces the effect of the small number of available samples and of strong additive noise on the cumulant slices. These modifications are found to improve the overall deconvolution performance, with approximately a five-fold reduction in mean square error (MSE) and a six

  1. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of a linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of ranging errors due to peak overlap or of peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra, the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated

  2. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  3. t matrix of metallic wire structures

    International Nuclear Information System (INIS)

    Zhan, T. R.; Chui, S. T.

    2014-01-01

    To study the electromagnetic resonance and scattering properties of complex structures of which metallic wire structures are constituents within multiple scattering theory, the t matrix of individual structures is needed. We have recently developed a rigorous and numerically efficient equivalent circuit theory in which retardation effects are taken into account for metallic wire structures. Here, we show how the t matrix can be calculated analytically within this theory. We illustrate our method with the example of split ring resonators. The density of states and cross sections for scattering and absorption are calculated, which are shown to be remarkably enhanced at resonant frequencies. The t matrix serves as the basic building block to evaluate the interaction of wire structures within the framework of multiple scattering theory. This will open the door to efficient design and optimization of assembly of wire structures

  4. Age-related collagen turnover of the interstitial matrix and basement membrane: Implications of age- and sex-dependent remodeling of the extracellular matrix

    DEFF Research Database (Denmark)

    Kehlet, Stephanie N.; Willumsen, Nicholas; Armbrecht, Gabriele

    2018-01-01

The extracellular matrix (ECM) plays a vital role in maintaining normal tissue function. Collagens are major components of the ECM and there is a tight equilibrium between degradation and formation of these proteins ensuring tissue health and homeostasis. As a consequence of tissue turnover, small...... collagen fragments are released into the circulation, which act as important biomarkers in the study of certain tissue-related remodeling factors in health and disease. The aim of this study was to establish an age-related collagen turnover profile of the main collagens of the interstitial matrix (type I...... an increased turnover. In summary, collagen turnover is affected by age and sex, with the interstitial matrix and the basement membrane being differently regulated. The observed changes need to be accounted for when measuring ECM-related biomarkers in clinical studies....

  5. Development of GAGG depth-of-interaction (DOI) block detectors based on pulse shape analysis

    International Nuclear Information System (INIS)

    Yamamoto, Seiichi; Kobayashi, Takahiro; Yeol Yeom, Jung; Morishita, Yuki; Sato, Hiroki; Endo, Takanori; Usuki, Yoshiyuki; Kamada, Kei; Yoshikawa, Akira

    2014-01-01

A depth-of-interaction (DOI) detector is required for developing a high resolution and high sensitivity PET system. Ce-doped Gd3Al2Ga3O12 (GAGG fast: GAGG-F) is a promising scintillator for PET applications with high light output, no natural radioisotope, and a light emission wavelength suitable for semiconductor-based photodetectors. However, no DOI detector based on pulse shape analysis with GAGG-F has been developed to date, due to the lack of an appropriate scintillator for pairing. Recently a new variation of this scintillator with a different Al/Ga ratio, Ce-doped Gd3Al2.6Ga2.4O12 (GAGG slow: GAGG-S), which has a slower decay time, was developed. The combination of GAGG-F and GAGG-S may allow us to realize high resolution DOI detectors based on pulse shape analysis. We developed and tested two GAGG phoswich DOI block detectors comprised of pixelated GAGG-F and GAGG-S scintillation crystals. One phoswich block detector consisted of 2×2×5 mm³ pixels assembled into a 5×5 matrix. The DOI block was optically coupled to a silicon photomultiplier (Si-PM) array (Hamamatsu MPPC S11064-050P) with a 2-mm thick light guide. The other phoswich block detector consisted of 0.5×0.5×5 mm³ (GAGG-F) and 0.5×0.5×6 mm³ (GAGG-S) pixels assembled into a 20×20 matrix. This DOI block was also optically coupled to the same Si-PM array with a 2-mm thick light guide. In the block detector with 2-mm crystal pixels (5×5 matrix), the 2-dimensional histogram revealed excellent separation with an average energy resolution of 14.1% for 662-keV gamma photons. The pulse shape spectrum displayed good separation with a peak-to-valley ratio of 8.7. In the block detector that used 0.5-mm crystal pixels (20×20 matrix), the 2-dimensional histogram also showed good separation with an energy resolution of 27.5% for the 662-keV gamma photons. The pulse shape spectrum displayed good separation with a peak-to-valley ratio of 6.5. These results indicate that phoswich DOI
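The pulse-shape separation of the fast and slow GAGG layers can be illustrated with a simple charge-comparison (tail-to-total ratio) classifier on synthetic exponential pulses. The decay constants, sampling, gate setting, and noise level below are nominal assumptions for illustration, not the detector's measured values:

```python
import numpy as np

rng = np.random.default_rng(7)

def make_pulse(tau_ns, n=400, dt=4.0, amp=1.0, noise=0.01):
    """Synthetic scintillation pulse: exponential decay sampled every dt ns."""
    t = np.arange(n) * dt
    return amp * np.exp(-t / tau_ns) + noise * rng.standard_normal(n)

def tail_to_total(pulse, gate=25):
    """Charge-comparison figure of merit: fraction of charge after the gate sample."""
    return pulse[gate:].sum() / pulse.sum()

# Assumed decay constants: a faster (~90 ns) and a slower (~150 ns) component,
# standing in for the GAGG-F / GAGG-S pair.
fast = [tail_to_total(make_pulse(90.0)) for _ in range(100)]
slow = [tail_to_total(make_pulse(150.0)) for _ in range(100)]
threshold = (np.mean(fast) + np.mean(slow)) / 2.0
```

A slower decay leaves a larger fraction of the charge after the gate, so a single threshold on the ratio assigns each event to one crystal layer, which is the basis of the peak-to-valley separation quoted above.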

  6. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source

  7. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  8. DECONVOLUTION OF IMAGES FROM BLAST 2005: INSIGHT INTO THE K3-50 AND IC 5146 STAR-FORMING REGIONS

    International Nuclear Information System (INIS)

    Roy, Arabindo; Netterfield, Calvin B.; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Gibb, Andrew G.; Halpern, Mark; Marsden, Gaelen; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; France, Kevin; Gundersen, Joshua O.; Hughes, David H.; Martin, Peter G.; Olmi, Luca

    2011-01-01

    We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and ¹²CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting
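The flux-conserving L-R update at the heart of this record multiplies the current estimate by the back-projected ratio of data to model. A minimal 1-D numpy sketch (the PSF and test signal are invented for illustration; this is not the BLAST pipeline):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100, eps=1e-12):
    """Minimal 1-D Richardson-Lucy deconvolution with a normalized PSF."""
    estimate = np.full_like(observed, observed.mean())  # flat starting guess
    psf_mirror = psf[::-1]                              # correlation = convolution with mirrored PSF
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)              # data / model
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a point source and recover it.
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])           # sums to 1
truth = np.zeros(32); truth[16] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Because the PSF is normalized, each iteration approximately conserves total flux, which is the property the record emphasizes.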

  9. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra by using a peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq·kg⁻¹ from Ankatso, soil sample spectra with average activities of about (125±2) Bq·kg⁻¹ from Antsirabe, and soil sample spectra with high activities of about (21100±120) Bq·kg⁻¹ from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows the deconvolution of many multiplet regions: the quartet within 235-242 keV; Pb-214 and Pb-212 within 294-301 keV; Th-232 daughters within 582-584 keV; Ac-228 within 904-911 keV and within 964-970 keV; and Bi-214 within 1401-1408 keV. Those peaks were used to quantify the considered radionuclides. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.

  10. Block-conjugate-gradient method

    International Nuclear Information System (INIS)

    McCarthy, J.F.

    1989-01-01

    It is shown that by using the block-conjugate-gradient method several, say s, columns of the inverse Kogut-Susskind fermion matrix can be found simultaneously, in less time than it would take to run the standard conjugate-gradient algorithm s times. The method improves in efficiency relative to the standard conjugate-gradient algorithm as the fermion mass is decreased and as the value of the coupling is pushed to its limit before the finite-size effects become important. Thus it is potentially useful for measuring propagators in large lattice-gauge-theory calculations of the particle spectrum
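The record's idea, solving for s columns at once while sharing the matrix product and the Krylov space, can be sketched with dense numpy linear algebra (a toy SPD system with invented names, not a Kogut-Susskind fermion matrix):

```python
import numpy as np

def block_cg(A, B, tol=1e-10, max_iter=200):
    """Block conjugate gradient: solve A X = B for all columns of B at once (A SPD)."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    RtR = R.T @ R
    for _ in range(max_iter):
        AP = A @ P                              # one matrix-block product shared by all columns
        alpha = np.linalg.solve(P.T @ AP, RtR)
        X += P @ alpha
        R -= AP @ alpha
        RtR_new = R.T @ R
        if np.sqrt(np.trace(RtR_new)) < tol:    # Frobenius norm of the block residual
            break
        beta = np.linalg.solve(RtR, RtR_new)
        P = R + P @ beta
        RtR = RtR_new
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)        # toy SPD stand-in for the fermion matrix
B = rng.standard_normal((20, 2))     # s = 2 right-hand sides solved simultaneously
X = block_cg(A, B)
```

Each iteration performs one shared product A @ P for all s right-hand sides, which is where the saving over s independent CG runs comes from.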

  11. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  12. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decompositions. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
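For reference, the baseline NMF that INMF builds on is commonly fit with Lee-Seung multiplicative updates; a compact generic sketch (random data for illustration, not the paper's incremental algorithm):

```python
import numpy as np

def nmf(V, rank, iterations=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F, keeping W, H nonnegative."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iterations):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.random.default_rng(1).random((30, 20))  # nonnegative data matrix
W, H = nmf(V, rank=5)
```

The multiplicative form guarantees nonnegativity is preserved at every step, since each factor is only ever scaled by nonnegative ratios.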

  13. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih

    2017-09-14

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

  14. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih; Turkiyyah, George; Ltaief, Hatem; Keyes, David E.

    2017-01-01

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

  15. Neutrino mass matrix: Inverted hierarchy and CP violation

    International Nuclear Information System (INIS)

    Frigerio, Michele; Smirnov, Alexei Yu.

    2003-01-01

    We reconstruct the neutrino mass matrix in the flavor basis, in the case of an inverted mass hierarchy (ordering), using all available experimental data on neutrino masses and oscillations. We analyze the dependence of the matrix elements m_αβ on the CP-violating Dirac (δ) and Majorana (ρ and σ) phases, for different values of the absolute mass scale. We find that the present data admit various structures of the mass matrix: (i) hierarchical structures with a set of small (zero) elements; (ii) structures with equalities among various groups of elements: e-row and/or μτ-block elements, diagonal and/or off-diagonal elements; (iii) 'democratic' structure. We find the values of phases for which these structures are realized. The mass matrix elements can anticorrelate with flavor: inverted partial or complete flavor alignment is possible. For various structures of the mass matrix we identify the possible underlying symmetry. We find that the mass matrix can be reconstructed completely only in particular cases, provided that the absolute scale of the mass is measured. Generally, the freedom related to the Majorana phase σ will not be removed, thus admitting various types of mass matrix

  16. Interactive Block Games for Assessing Children's Cognitive Skills: Design and Preliminary Evaluation

    Directory of Open Access Journals (Sweden)

    Kiju Lee

    2018-05-01

    Full Text Available Background: This paper presents the design and results from a preliminary evaluation of Tangible Geometric Games (TAG-Games) for cognitive assessment in young children. The TAG-Games technology employs a set of sensor-integrated cube blocks, called SIG-Blocks, and graphical user interfaces for test administration and real-time performance monitoring. TAG-Games were administered to children from 4 to 8 years of age to evaluate the preliminary efficacy of this new technology-based approach. Methods: Five different sets of SIG-Blocks, comprising geometric shapes, segmented human faces, segmented animal faces, emoticons, and colors, were used for three types of TAG-Games: Assembly, Shape Matching, and Sequence Memory. Computational task difficulty measures were defined for each game and used to generate items with varying difficulty. For the preliminary evaluation, TAG-Games were tested on 40 children. To explore the clinical utility of the information assessed by TAG-Games, three subtests of the age-appropriate Wechsler tests (i.e., Block Design, Matrix Reasoning, and Picture Concept) were also administered. Results: Internal consistency of TAG-Games was evaluated by the split-half reliability test. Weak to moderate correlations between Assembly and Block Design, Shape Matching and Matrix Reasoning, and Sequence Memory and Picture Concept were found. The computational measure of task complexity for each TAG-Game showed a significant correlation with participants' performance. In addition, age correlations on TAG-Game scores were found, implying its potential use for assessing children's cognitive skills autonomously.

  17. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  18. Sortase-Mediated Ligation of Purely Artificial Building Blocks

    Directory of Open Access Journals (Sweden)

    Xiaolin Dai

    2018-02-01

    Full Text Available Sortase A (SrtA) from Staphylococcus aureus has often been used in recent years for ligating a protein with other natural or synthetic compounds. Here we show that SrtA-mediated ligation (SML) is universally applicable for the linkage of two purely artificial building blocks. Silica nanoparticles (NPs), poly(ethylene glycol), and poly(N-isopropyl acrylamide) are chosen as synthetic building blocks. As a proof of concept, NP-polymer, NP-NP, and polymer-polymer structures are formed by SrtA catalysis. For this purpose, the building blocks are equipped with the recognition sequence needed for the SrtA reaction, the conserved peptide LPETG, and a pentaglycine motif. The successful formation of the reaction products is shown by means of transmission electron microscopy (TEM), matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-ToF MS), and dynamic light scattering (DLS). The sortase-catalyzed linkage of artificial building blocks sets the stage for the development of a new approach to link synthetic structures in cases where their synthesis by established chemical methods is complicated.

  19. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Clark, M. A. [NVIDIA Corp., Santa Clara; Strelchenko, Alexei [Fermilab; Vaquero, Alejandro [Utah U.; Wagner, Mathias [NVIDIA Corp., Santa Clara; Weinberg, Evan [Boston U.

    2017-10-26

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.

  20. Hydro-mechanical and gas transport properties of bentonite blocks - role of interfaces

    International Nuclear Information System (INIS)

    Popp, Till; Roehlke, Christopher; Salzer, Klaus; Gruner, Matthias

    2012-01-01

    Document available in extended abstract form only. The long-term safety of the disposal of nuclear waste is an important issue in all countries with a significant nuclear programme. Repositories for the disposal of high-level and long-lived radioactive waste generally rely on a multi-barrier system to isolate the waste from the biosphere. The multi-barrier system typically comprises the natural geological barrier provided by the repository host rock and its surroundings, and an engineered barrier system (EBS), i.e. the backfilling and sealing of shafts and galleries to block any preferential path for radioactive contaminants. Because gas will be created in a radioactive waste repository, performance assessment requires quantification of the relevance of various potential pathways. Regarding the sealing plugs, it is expected that, in addition to the matrix properties of the sealing material, conductive discrete interfaces inside the sealing elements themselves and towards the host rock may act not only as mechanical weakness planes but also as preferential gas pathways (Popp, 2009). For instance, despite the assumed self-sealing capacity of bentonite, inherently existing interfaces may be reopened during gas injection. Our laboratory investigations aim at a comprehensive hydro-mechanical characterization of interfaces in bentonite buffers, i.e. (1) between prefabricated bentonite blocks themselves, and (2) at mechanical contacts of bentonite blocks and concrete with various host rocks, i.e. granite. As reference material we used pre-compacted bentonite blocks consisting of a sand-clay-bentonite mixture, although the variety of bentonite-based buffer materials has to be borne in mind. The blocks were manufactured in the framework of the so-called dam project 'Sondershausen', a German research project performed between 1997 and 2002. The blocks have a standard size of (250 x 125 x 62.5) mm. 
Approximately 500 t of such bentonite blocks have been produced and assembled in underground drift

  1. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé , François-Xavier; Fadili , Jalal M.; Starck , Jean-Luc

    2012-01-01

    International audience; In this paper, we propose a Bayesian MAP estimator for solving the deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type spars...

  2. Modeling CO2 Storage in Fractured Reservoirs: Fracture-Matrix Interactions of Free-Phase and Dissolved CO2

    Science.gov (United States)

    Oldenburg, C. M.; Zhou, Q.; Birkholzer, J. T.

    2017-12-01

    The injection of supercritical CO2 (scCO2) in fractured reservoirs has been conducted at several storage sites. However, no site-specific dual-continuum modeling for fractured reservoirs has been reported, and modeling studies have generally underestimated the fracture-matrix interactions. We developed a conceptual model for enhanced CO2 storage to take into account global scCO2 migration in the fracture continuum, local storage of scCO2 and dissolved CO2 (dsCO2) in the matrix continuum, and driving forces for scCO2 invasion and dsCO2 diffusion from fractures. High-resolution discrete fracture-matrix models were developed for a column of idealized matrix blocks bounded by vertical and horizontal fractures and for a km-scale fractured reservoir. The column-scale simulation results show that equilibrium storage efficiency strongly depends on matrix entry capillary pressure and matrix-matrix connectivity, while the time scale to reach equilibrium is sensitive to fracture spacing and matrix flow properties. The reservoir-scale modeling results show that the preferential migration of scCO2 through fractures is coupled with bulk storage in the rock matrix that in turn retards the fracture scCO2 plume. We also developed unified-form diffusive flux equations to account for dsCO2 storage in brine-filled matrix blocks and found that solubility trapping is significant in fractured reservoirs with low-permeability matrix.

  3. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    Thermoluminescence dosimeters (TLDs), especially those based on LiF:Mg,Ti, are among the most practical personal dosimeters known to date. Dose measurement below 100 uGy using a TLD reader is very difficult at high precision, so software analysis is used to improve the precision of the TLD reader. The objective of this research is to compare three TL glow curve analysis methods for doses in the range of 5 to 250 uGy. In the first, manual, method, dose information is obtained from the area under the glow curve between pre-selected temperature limits, and the background signal is estimated by a second readout following the first. The second method is deconvolution: the glow curve is separated mathematically into four peaks, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six-fold over manual analysis at a dose of 20 uGy, and reduces the MMD to 10 uGy, rather than 60 uGy with manual analysis or 20 uGy with the peak-5-area method. In linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose response curve over the entire dose range
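Once the peak positions and shapes are fixed, separating overlapping glow peaks and summing selected peak areas reduces to a linear fit. A toy numpy sketch with Gaussian peak shapes (the positions, widths and amplitudes are invented, and real glow peaks follow first-order kinetics rather than Gaussians):

```python
import numpy as np

def gaussian(t, center, width):
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Hypothetical glow curve built from three overlapping peaks ("peaks 3, 4 and 5").
t = np.linspace(0.0, 10.0, 500)
centers, widths = [3.0, 4.5, 6.0], [0.6, 0.7, 0.8]
true_amps = np.array([2.0, 1.0, 3.0])
design = np.column_stack([gaussian(t, c, w) for c, w in zip(centers, widths)])
curve = design @ true_amps

# With the peak shapes fixed, the amplitudes follow from linear least squares,
# and the dose estimate is the sum of the recovered peak areas.
amps, *_ = np.linalg.lstsq(design, curve, rcond=None)
areas = amps * np.array(widths) * np.sqrt(2.0 * np.pi)  # analytic Gaussian areas
dose_signal = areas.sum()
```

A full glow curve deconvolution would additionally fit the peak positions and widths (a nonlinear problem), but the area-summing step shown here is the part that distinguishes the three methods compared in the record.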

  4. Block recursive LU preconditioners for the thermally coupled incompressible inductionless MHD problem

    Science.gov (United States)

    Badia, Santiago; Martín, Alberto F.; Planas, Ramon

    2014-10-01

    The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically charged fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 × 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknowns, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioners.

  5. Increasing the darkfield contrast-to-noise ratio using a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging.

    Science.gov (United States)

    Weber, Thomas; Pelzer, Georg; Bayer, Florian; Horn, Florian; Rieger, Jens; Ritter, André; Zang, Andrea; Durst, Jürgen; Anton, Gisela; Michel, Thilo

    2013-07-29

    A novel information retrieval algorithm for X-ray grating-based phase-contrast imaging, based on the deconvolution of the object and the reference phase stepping curve (PSC) as proposed by Modregger et al., was investigated in this paper. We applied the method for the first time to data obtained with a polychromatic spectrum and compared the results to those obtained with the commonly used method based on a Fourier analysis. We confirmed the expectation that both methods deliver the same results for the absorption and the differential phase image. For the darkfield image, a mean contrast-to-noise ratio (CNR) increase by a factor of 1.17 was found using the new method. Furthermore, the dose saving potential of the deconvolution method was estimated experimentally. It is found that the conventional method requires a dose higher by a factor of 1.66 to obtain a CNR value similar to that of the novel method. A further analysis of the data revealed that the improvement in CNR and dose efficiency is due to the superior background noise properties of the deconvolution method, but comes at the cost of comparability between measurements at different applied dose values, as the mean value becomes dependent on the photon statistics used.
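The commonly used Fourier method referenced above extracts all three images from the phase stepping curve harmonics: the mean gives absorption, the first-harmonic phase gives the differential phase, and the visibility ratio gives the darkfield. A toy sketch with invented curve parameters:

```python
import numpy as np

steps = 8
phi = 2 * np.pi * np.arange(steps) / steps
reference = 100 * (1 + 0.2 * np.cos(phi))        # reference PSC: mean 100, visibility 0.2
sample = 80 * (1 + 0.1 * np.cos(phi + 0.5))      # sample PSC: absorbed, phase-shifted, decohered

def psc_params(curve):
    """Mean, visibility and phase of a phase stepping curve from its FFT."""
    F = np.fft.fft(curve)
    mean = F[0].real / len(curve)
    visibility = 2 * np.abs(F[1]) / F[0].real
    phase = np.angle(F[1])
    return mean, visibility, phase

m_r, v_r, p_r = psc_params(reference)
m_s, v_s, p_s = psc_params(sample)
transmission = m_s / m_r      # absorption image pixel value
dpc = p_s - p_r               # differential phase
darkfield = v_s / v_r         # darkfield: relative visibility reduction
```

The deconvolution method of the record replaces this harmonic analysis by deconvolving the sample PSC with the reference PSC, which uses the full curve shape rather than only the first harmonic.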

  6. Simplified Casing Program for Development Wells in Mahu Well Block

    Directory of Open Access Journals (Sweden)

    Lu Zongyu

    2017-01-01

    Full Text Available In the Mahu well block of the Junggar basin, the complex formation contains many pressure systems. In particular, the formation with microcracks in the middle layer is loose and its pressure-bearing capacity is low, so lost circulation is prone to occur in this layer. At present, high investment and long drilling periods are the main problems in the exploration and development process. A 3D geostress model of the Mahu well block was established from logging and drilling data. The model provided the three-pressure profiles of the Mahu well block for casing program optimization and safe drilling, and the intermediate casing setting position could be optimized for each well, saving 160 meters of intermediate casing. The overall drilling speed was improved 5 times compared with the previous drilling process. Slim hole drilling technology raised the ROP by 51.96%, and the average drilling period was shortened to 24.83 days.

  7. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. 
Higher contrast results were
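Of the comparison methods implemented in this work, Wiener filtering is the simplest to sketch: a regularized FFT-domain inverse filter. A generic 1-D illustration (the kernel, signal and noise-to-signal ratio are invented for the example):

```python
import numpy as np

def wiener_deconvolve(observed, psf, nsr=1e-3):
    """FFT-domain Wiener filter; nsr is the assumed noise-to-signal power ratio."""
    n = len(observed)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # regularized inverse filter
    return np.real(np.fft.ifft(G * np.fft.fft(observed)))

truth = np.zeros(64)
truth[20], truth[30] = 1.0, 0.5
psf = np.array([0.6, 0.3, 0.1])               # blur kernel with no zeros on the unit circle
observed = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf, 64)))  # circular blur
restored = wiener_deconvolve(observed, psf, nsr=1e-9)
```

Unlike the maximum entropy approach of the record, the Wiener filter is linear and cannot enforce positivity or preserve total counts, which is exactly the trade-off the thesis examines for low-count data.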

  8. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  9. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
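The repeated-filtering idea of ADM can be sketched in 1-D: approximate the inverse of the filter G by the truncated series sum over k = 0..N of (I - G)^k applied to the filtered field. A toy periodic box filter stands in for the LES filter here; this is not the two-fluid closure itself:

```python
import numpy as np

def box_filter(u):
    """Three-point top-hat filter G on a periodic grid."""
    return (np.roll(u, 1) + u + np.roll(u, -1)) / 3.0

def adm_deconvolve(u_filtered, order=5):
    """Approximate deconvolution: u ~= sum_{k=0}^{N} (I - G)^k applied to the filtered field."""
    u = np.zeros_like(u_filtered)
    term = u_filtered.copy()
    for _ in range(order + 1):
        u += term
        term -= box_filter(term)   # next term: (I - G) applied to the previous one
    return u

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
truth = np.sin(x) + 0.5 * np.sin(3 * x)
filtered = box_filter(truth)
recovered = adm_deconvolve(filtered)
```

In an a priori test such as the one in the record, `recovered` would be substituted into the unclosed terms of the filtered equations and compared against the fine-grid reference.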

  10. Special matrices of mathematical physics stochastic, circulant and Bell matrices

    CERN Document Server

    Aldrovandi, R

    2001-01-01

    This book expounds three special kinds of matrices that are of physical interest, centering on physical examples. Stochastic matrices describe dynamical systems of many different types, involving (or not) phenomena like transience, dissipation, ergodicity, nonequilibrium, and hypersensitivity to initial conditions. The main characteristic is growth by agglomeration, as in glass formation. Circulants are the building blocks of elementary Fourier analysis and provide a natural gateway to quantum mechanics and noncommutative geometry. Bell polynomials offer closed expressions for many formulas co
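The remark that circulants are the building blocks of elementary Fourier analysis is concrete: a circulant system C x = b is diagonalized by the DFT and can be solved with two FFTs, which is also the scalar version of the block-circulant deconvolution in this topic. A small sketch:

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b, where C is circulant with first column c, via FFT diagonalization."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

c = np.array([4.0, 1.0, 0.0, 1.0])   # first column; the eigenvalues are fft(c), all nonzero here
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_circulant(c, b)

# Explicit circulant matrix for verification: C[i, j] = c[(i - j) mod n].
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
```

The O(n log n) cost of this solve, versus O(n³) for a general dense system, is what makes circulant and block-circulant structure so valuable in deconvolution problems.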

  11. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-23

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
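The block-sparse structure exploited in the paper can be illustrated with a minimal BSR-style SpMV in plain numpy, where the inner kernel is a dense block-times-vector product (a pedagogical sketch with an invented layout, not the GPU kernel):

```python
import numpy as np

def bsr_spmv(blocks, col_idx, row_ptr, x, bs):
    """y = A @ x for a block-sparse matrix stored BSR-style with bs-by-bs dense blocks."""
    n_block_rows = len(row_ptr) - 1
    y = np.zeros(n_block_rows * bs)
    for i in range(n_block_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[k]
            # The dense inner kernel: this is the part that maps onto BLAS-style hardware.
            y[i * bs:(i + 1) * bs] += blocks[k] @ x[j * bs:(j + 1) * bs]
    return y

# A 4x4 matrix as a 2x2 grid of 2x2 blocks: block row 0 holds blocks in block
# columns 0 and 1; block row 1 holds a single block in block column 1.
rng = np.random.default_rng(0)
blocks = rng.standard_normal((3, 2, 2))
col_idx, row_ptr = [0, 1, 1], [0, 2, 3]
x = rng.standard_normal(4)
y = bsr_spmv(blocks, col_idx, row_ptr, x, bs=2)
```

Storing whole dense blocks rather than individual nonzeros is what lets the kernel replace scattered scalar multiplies with the dense operations the paper optimizes.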

  12. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.; Dongarra, Jack

    2016-01-01

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717
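    The two-way admixture estimation described above can be sketched as a projection onto the axis joining the parental-population centroids, a one-dimensional stand-in for the principal component separating the parental groups. The function, the 0/1/2 genotype coding, and the toy populations below are hypothetical illustrations, not the authors' pipeline.

```python
def admixture_proportion(sample, pop_a, pop_b):
    """Fraction of a sample's ancestry attributed to population A.

    pop_a and pop_b are lists of genotype vectors (0/1/2 allele counts)
    for the two reference populations; the sample is projected onto the
    axis joining the population centroids."""
    n = len(sample)
    cent_a = [sum(v[i] for v in pop_a) / len(pop_a) for i in range(n)]
    cent_b = [sum(v[i] for v in pop_b) / len(pop_b) for i in range(n)]
    axis = [a - b for a, b in zip(cent_a, cent_b)]
    denom = sum(d * d for d in axis)
    t = sum((s - b) * d for s, b, d in zip(sample, cent_b, axis)) / denom
    return max(0.0, min(1.0, t))        # clamp to a valid proportion
```

    With a small panel of ancestry informative markers, the axis is dominated by the loci that differ most between the parental groups, which is why fewer than 50 AIMs can suffice for a two-way estimate.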

  14. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    Full Text Available The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.

  15. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and post-processed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment: the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  16. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  17. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, prospectively collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of the differences in reader preference was tested with a Wilcoxon signed rank test. All four readers preferred the images with the algorithm applied to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0). Visualization of chest anatomical structures with the deconvolution algorithm of PSF applied was judged superior to that of the original chest radiography.

  18. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
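    The trade-off this record describes can be sketched in 1-D: a blur whose kernel varies with position is a dense operator that the FFT cannot diagonalize, but because most entries are tiny, zeroing them (a crude surrogate for the paper's lossy matrix source coding, not its actual transform-domain scheme) sparsifies the operator at a small, controllable cost in accuracy. All names and parameters below are illustrative assumptions.

```python
from math import exp

def space_varying_blur_matrix(n):
    """Dense operator for a 1-D blur whose width grows across the field
    (a toy stand-in for a slowly varying PSF such as stray light)."""
    A = []
    for i in range(n):
        sigma = 1.0 + 2.0 * i / n               # kernel widens with position
        row = [exp(-((i - j) / sigma) ** 2) for j in range(n)]
        s = sum(row)
        A.append([v / s for v in row])          # each row sums to 1
    return A

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def sparsify(A, tol):
    """Zero out entries below tol: a crude surrogate for lossy coding of
    the convolution matrix, trading a small error for fewer operations."""
    return [[v if abs(v) >= tol else 0.0 for v in row] for row in A]
```

    The paper's method goes further by applying the sparsification in a transform domain, which concentrates the operator's energy and makes the same error budget buy far more sparsity.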

  19. Fibroblast Cluster Formation on 3D Collagen Matrices Requires Cell Contraction-Dependent Fibronectin Matrix Organization

    Science.gov (United States)

    da Rocha-Azevedo, Bruno; Ho, Chin-Han; Grinnell, Frederick

    2012-01-01

    Fibroblasts incubated on 3D collagen matrices in serum or lysophosphatidic acid (LPA)-containing medium self-organize into clusters through a mechanism that requires cell contraction. However, in platelet-derived growth factor (PDGF)-containing medium, cells migrate as individuals and do not form clusters even though they constantly encounter each other. Here, we present evidence that a required function of cell contraction in clustering is formation of fibronectin fibrillar matrix. We found that in serum or LPA but not in PDGF or basal medium, cells organized FN (both serum and cellular) into a fibrillar, detergent-insoluble matrix. Cell clusters developed concomitant with FN matrix formation. FN fibrils accumulated beneath cells and along the borders of cell clusters in regions of cell-matrix tension. Blocking Rho kinase or myosin II activity prevented FN matrix assembly and cell clustering. Using siRNA silencing and function-blocking antibodies and peptides, we found that cell clustering and FN matrix assembly required α5β1 integrins and fibronectin. Cells were still able to exert contractile force and compact the collagen matrix under the latter conditions, which showed that contraction was not sufficient for cell clustering to occur. Our findings provide new insights into how procontractile (serum/LPA) and promigratory (PDGF) growth factor environments can differentially regulate FN matrix assembly by fibroblasts interacting with collagen matrices and thereby influence mesenchymal cell morphogenetic behavior under physiologic circumstances such as wound repair, morphogenesis and malignancy. PMID:23117111

  20. Fibroblast cluster formation on 3D collagen matrices requires cell contraction dependent fibronectin matrix organization.

    Science.gov (United States)

    da Rocha-Azevedo, Bruno; Ho, Chin-Han; Grinnell, Frederick

    2013-02-15

    Fibroblasts incubated on 3D collagen matrices in serum or lysophosphatidic acid (LPA)-containing medium self-organize into clusters through a mechanism that requires cell contraction. However, in platelet-derived growth factor (PDGF)-containing medium, cells migrate as individuals and do not form clusters even though they constantly encounter each other. Here, we present evidence that a required function of cell contraction in clustering is formation of fibronectin (FN) fibrillar matrix. We found that in serum or LPA but not in PDGF or basal medium, cells organized FN (both serum and cellular) into a fibrillar, detergent-insoluble matrix. Cell clusters developed concomitant with FN matrix formation. FN fibrils accumulated beneath cells and along the borders of cell clusters in regions of cell-matrix tension. Blocking Rho kinase or myosin II activity prevented FN matrix assembly and cell clustering. Using siRNA silencing and function-blocking antibodies and peptides, we found that cell clustering and FN matrix assembly required α5β1 integrins and fibronectin. Cells were still able to exert contractile force and compact the collagen matrix under the latter conditions, which showed that contraction was not sufficient for cell clustering to occur. Our findings provide new insights into how procontractile (serum/LPA) and promigratory (PDGF) growth factor environments can differentially regulate FN matrix assembly by fibroblasts interacting with collagen matrices and thereby influence mesenchymal cell morphogenetic behavior under physiologic circumstances such as wound repair, morphogenesis and malignancy. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    Full Text Available The auditory steady-state response (ASSR) is one of the main approaches in the clinic for health screening and frequency-specific hearing assessment. However, its generation mechanism is still of much controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence while MSAD uses the same ISIs but evenly-spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology and distinct frequency characteristics in synthetic ASSRs. The prediction accuracies of ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. Such a similarity is also demonstrated at the individual level only for MSAD, showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and the sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed in contributing to the composition of ASSR but
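    The linear-superposition hypothesis tested in this record can be sketched by summing a transient template at the stimulation rate: after the initial onset, the superposed responses settle into a periodic steady state. The damped-oscillation template and all parameters below are hypothetical stand-ins, not the study's reconstructed AEPs.

```python
from math import exp, sin, pi

def transient_aep(t):
    """Hypothetical transient AEP template: a damped 60 Hz oscillation
    with a 30 ms decay time constant (illustrative only)."""
    return exp(-t / 0.030) * sin(2 * pi * 60 * t) if t >= 0 else 0.0

def synthesize_assr(rate_hz, duration, fs):
    """Linear-superposition model: the steady-state response is the sum
    of transient responses to each stimulus in a periodic train."""
    onsets = [k / rate_hz for k in range(int(duration * rate_hz) + 1)]
    return [sum(transient_aep(i / fs - t0) for t0 in onsets)
            for i in range(int(duration * fs))]
```

    Deconvolution methods such as CLAD and MSAD run this model in reverse, recovering the transient template from the measured steady-state (or pseudo-steady-state) recording.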

  2. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Full Text Available Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells which are most likely the cytoskeletal MreB protein and the division site regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  3. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were: centroid distance/FWHM ratio, signal/continuum ratio, and counting rate. The test using a synthetic spectrum was built with 3σ uncertainty. The tests gave suitable results for centroid distance/FWHM ratio ≥ 2, signal/continuum ratio ≥ 2, and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg⁻¹ to (190±17) Bq.kg⁻¹ for potassium-40; from (343±7) Bq.kg⁻¹ to (881±6) Bq.kg⁻¹ for thorium-232; and from (100±3) Bq.kg⁻¹ to (164±4) Bq.kg⁻¹ for uranium-238. The soil activity varies from (148±1) Bq.kg⁻¹ to (652±31) Bq.kg⁻¹ for potassium-40; from (1100±11) Bq.kg⁻¹ to (5700±40) Bq.kg⁻¹ for thorium-232; and from (190±2) Bq.kg⁻¹ to (779±15) Bq.kg⁻¹ for uranium-238. Among 11 samples, the activity value discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for such an activity range and that convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within an acceptable limit, if identification of each nuclide in the radioactive series is not required

  4. Block Copolymer Modified Epoxy Amine System for Reactive Rotational Molding: Structures, Properties and Processability

    Science.gov (United States)

    Lecocq, Eva; Nony, Fabien; Tcharkhtchi, Abbas; Gérard, Jean-François

    2011-05-01

    Poly(styrene-butadiene-methylmethacrylate) (SBM) and poly(methylmethacrylate-butyl acrylate-methylmethacrylate) (MAM) triblock copolymers have been dissolved in liquid DGEBA epoxy resin, which is subsequently polymerized by meta-xylene diamine (MXDA) or Jeffamine EDR-148. A chemorheology study of these formulations by plate-plate rheology and by thermal analysis led to the conclusion that the addition of these block copolymers improves reactive rotational moulding processability without affecting the processing time. Indeed, it prevents pooling of the formulation at the bottom of the mould and too rapid a build-up of resin viscosity in these thermosetting systems. The morphology of the cured blends examined by scanning electron microscopy (SEM) shows an increase in fracture surface area and thereby a potential increase in toughness with the modification of the epoxy system. Dynamic mechanical spectroscopy (DMA) and the opalescence of the final material show that the PMMA block, initially miscible, is likely to phase-separate from the epoxy-amine matrix. Thereby, the poor compatibilization between the toughener and the matrix has a detrimental effect on the tensile mechanical properties. The compatibilization has to be increased to improve in synergy the processability and the final properties of these block copolymer modified formulations. A first attempt could be to adapt the length and ratio of each block.

  5. Classifying FM Value Positioning by Using a Product-Process Matrix

    DEFF Research Database (Denmark)

    Katchamart, Akarapong

    Approach (Theory/Methodology): The paper develops the facilities product - process matrix to allow comparisons of different facilities products with facilities processes and to illustrate their degree of value delivering. The building blocks of the matrix are a facilities product structure and a facilities process structure. Results: A facilities product structure, characterized by degrees of facilities product customization, complexity, and contingencies involved, defines four facilities product categories. A facilities process structure, characterized by levels of information, knowledge and innovation sharing, and mutual involvement, defines four facilities process types. Positions on the matrix capture the product-process interrelationships in facilities management. Practical Implications: The paper presents propositions relating the type of facilities product with the type of facilities processes between FM organizations and their clients.

  6. Minocycline attenuates experimental colitis in mice by blocking expression of inducible nitric oxide synthase and matrix metalloproteinases

    International Nuclear Information System (INIS)

    Huang, T.-Y.; Chu, H.-C.; Lin, Y.-L.; Lin, C.-K.; Hsieh, T.-Y.; Chang, W.-K.; Chao, Y.-C.; Liao, C.-L.

    2009-01-01

    In addition to its antimicrobial activity, minocycline exerts anti-inflammatory effects in several disease models. However, whether minocycline affects the pathogenesis of inflammatory bowel disease has not been determined. We investigated the effects of minocycline on experimental colitis and its underlying mechanisms. Acute and chronic colitis were induced in mice by treatment with dextran sulfate sodium (DSS) or trinitrobenzene sulfonic acid (TNBS), and the effect of minocycline on colonic injury was assessed clinically and histologically. Prophylactic and therapeutic treatment of mice with minocycline significantly diminished mortality rate and attenuated the severity of DSS-induced acute colitis. Mechanistically, minocycline administration suppressed inducible nitric oxide synthase (iNOS) expression and nitrotyrosine production, inhibited proinflammatory cytokine expression, repressed the elevated mRNA expression of matrix metalloproteinases (MMPs) 2, 3, 9, and 13, diminished the apoptotic index in colonic tissues, and inhibited nitric oxide production in the serum of mice with DSS-induced acute colitis. In DSS-induced chronic colitis, minocycline treatment also reduced body weight loss, improved colonic histology, and blocked expression of iNOS, proinflammatory cytokines, and MMPs from colonic tissues. Similarly, minocycline could ameliorate the severity of TNBS-induced acute colitis in mice by decreasing mortality rate and inhibiting proinflammatory cytokine expression in colonic tissues. These results demonstrate that minocycline protects mice against DSS- and TNBS-induced colitis, probably via inhibition of iNOS and MMP expression in intestinal tissues. Therefore, minocycline is a potential remedy for human inflammatory bowel diseases.

  7. Reversing resistance to vascular-disrupting agents by blocking late mobilization of circulating endothelial progenitor cells.

    Science.gov (United States)

    Taylor, Melissa; Billiot, Fanny; Marty, Virginie; Rouffiac, Valérie; Cohen, Patrick; Tournay, Elodie; Opolon, Paule; Louache, Fawzia; Vassal, Gilles; Laplace-Builhé, Corinne; Vielh, Philippe; Soria, Jean-Charles; Farace, Françoise

    2012-05-01

    The prevailing concept is that immediate mobilization of bone marrow-derived circulating endothelial progenitor cells (CEP) is a key mechanism mediating tumor resistance to vascular-disrupting agents (VDA). Here, we show that administration of VDA to tumor-bearing mice induces 2 distinct peaks in CEPs: an early, unspecific CEP efflux followed by a late yet more dramatic tumor-specific CEP burst that infiltrates tumors and is recruited to vessels. Combination with antiangiogenic drugs could not disrupt the early peak but completely abrogated the late VDA-induced CEP burst, blunted bone marrow-derived cell recruitment to tumors, and resulted in striking antitumor efficacy, indicating that the late CEP burst might be crucial to tumor recovery after VDA therapy. CEP and circulating endothelial cell kinetics in VDA-treated patients with cancer were remarkably consistent with our preclinical data. These findings expand the current understanding of vasculogenic "rebounds" that may be targeted to improve VDA-based strategies. Our findings suggest that resistance to VDA therapy may be strongly mediated by late, rather than early, tumor-specific recruitment of CEPs, the suppression of which resulted in increased VDA-mediated antitumor efficacy. VDA-based therapy might thus be significantly enhanced by combination strategies targeting late CEP mobilization. © 2012 AACR

  8. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the regularized Non-Negative Least Squares (NNLS) method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give results superior to those of the regularized least-squares algorithm, with significantly less computer processing time
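    The Richardson-Lucy iteration named in this record can be written down compactly. Below is a minimal 1-D sketch (the actual CDBS processing is 2-D, and the flat starting estimate and boundary handling are illustrative assumptions): each pass blurs the current estimate, compares it to the data as a ratio, and multiplies the estimate by the back-projected ratio, which preserves non-negativity and, for a normalized PSF, the total counts.

```python
def richardson_lucy(observed, psf, iterations=200):
    """1-D Richardson-Lucy deconvolution (the maximum-likelihood iteration
    for Poisson noise). `psf` must be normalized to sum to 1; a 'same'-size
    convolution is used, with the mirrored PSF in the correction step."""
    n = len(observed)

    def conv(signal, kernel):
        k2 = len(kernel) // 2
        out = [0.0] * n
        for i in range(n):
            acc = 0.0
            for k, w in enumerate(kernel):
                j = i + k - k2
                if 0 <= j < n:
                    acc += w * signal[j]
            out[i] = acc
        return out

    psf_rev = psf[::-1]
    est = [1.0] * n                      # flat, strictly positive start
    for _ in range(iterations):
        blurred = conv(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        est = [e * c for e, c in zip(est, conv(ratio, psf_rev))]
    return est
```

    On noiseless data the iteration sharpens a blurred peak back toward its original width; with noisy data the iteration count acts as the regularization knob.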

  9. Accelerated extracellular matrix turnover during exacerbations of COPD

    DEFF Research Database (Denmark)

    Sand, Jannie M B; Knox, Alan J; Lange, Peter

    2015-01-01

    progression. Extracellular matrix (ECM) turnover reflects activity in tissues and consequently assessment of ECM turnover may serve as biomarkers of disease activity. We hypothesized that the turnover of lung ECM proteins were altered during exacerbations of COPD. METHODS: 69 patients with COPD hospitalised...... of circulating fragments of structural proteins, which may serve as markers of disease activity. This suggests that patients with COPD have accelerated ECM turnover during exacerbations which may be related to disease progression....

  10. Quantum Correlation in Matrix Product States of One-Dimensional Spin Chains

    International Nuclear Information System (INIS)

    Zhu Jing-Min

    2015-01-01

    For our proposed composite parity-conserved matrix product state (MPS), if the spin block length is larger than 1, any two such spin blocks have correlation, including classical correlation and quantum correlation. Both the total correlation and the classical correlation become larger than those in any subcomponent, while the quantum correlations of the two nearest-neighbor spin blocks and the two next-nearest-neighbor spin blocks become smaller, and under other conditions the quantum correlation becomes larger; i.e., the increase or the production of the long-range quantum correlation comes at the cost of reducing the short-range quantum correlation, which deserves to be investigated in the future. The ratio of the quantum correlation to the total correlation monotonically decreases to a steady value as the spacing spin length increases.

  11. Entanglement entropy of two disjoint blocks in XY chains

    International Nuclear Information System (INIS)

    Fagotti, Maurizio; Calabrese, Pasquale

    2010-01-01

    We study the Rényi entanglement entropies of two disjoint intervals in XY chains. We exploit the exact solution of the model in terms of free Majorana fermions and we show how to construct the reduced density matrix in the spin variables by taking the Jordan–Wigner string between the two blocks properly into account. From this we can evaluate any Rényi entropy of finite integer order. We study in detail critical XX and Ising chains and we show that the asymptotic results for large blocks agree with recent conformal field theory predictions if corrections to the scaling are included in the analysis correctly. We also report results for the gapped phase and after a quantum quench

  12. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  13. A Single Sphingomyelin Species Promotes Exosomal Release of Endoglin into the Maternal Circulation in Preeclampsia.

    Science.gov (United States)

    Ermini, Leonardo; Ausman, Jonathan; Melland-Smith, Megan; Yeganeh, Behzad; Rolfo, Alessandro; Litvack, Michael L; Todros, Tullia; Letarte, Michelle; Post, Martin; Caniggia, Isabella

    2017-09-22

Preeclampsia (PE), a hypertensive disorder of pregnancy, exhibits increased circulating levels of a short form of the auxiliary TGF-beta (TGFB) receptor endoglin (sENG). Until now, its release and functionality in PE have remained poorly understood. Here we show that ENG selectively interacts with sphingomyelin(SM)-18:0, which promotes its clustering with metalloproteinase 14 (MMP14) in SM-18:0 enriched lipid rafts of the apical syncytial membranes from PE placenta, where ENG is cleaved by MMP14 into sENG. The SM-18:0 enriched lipid rafts also contain type 1 and 2 TGFB receptors (TGFBR1 and TGFBR2), but not soluble fms-like tyrosine kinase 1 (sFLT1), another protein secreted in excess in the circulation of women with PE. The truncated ENG is then released into the maternal circulation via SM-18:0 enriched exosomes together with TGFBR1 and 2. Such an exosomal TGFB receptor complex could be functionally active and block the vascular effects of TGFB in the circulation of PE women.

  14. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas...... of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core...

  15. Partial transpose of two disjoint blocks in XY spin chains

    International Nuclear Information System (INIS)

    Coser, Andrea; Tonni, Erik; Calabrese, Pasquale

    2015-01-01

We consider the partial transpose of the spin reduced density matrix of two disjoint blocks in spin chains admitting a representation in terms of free fermions, such as XY chains. We exploit the solution of the model in terms of Majorana fermions and show that the partial transpose in the spin variables is a linear combination of four Gaussian fermionic operators. This representation allows us to explicitly construct and evaluate the integer moments of the partial transpose. We numerically study critical XX and Ising chains and we show that the asymptotic results for large blocks agree with conformal field theory predictions if corrections to the scaling are properly taken into account. (paper)

  16. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

  17. Seasonal overturning circulation in the Red Sea: 2. Winter circulation

    KAUST Repository

Yao, Fengchao; Hoteit, Ibrahim; Pratt, Lawrence J.; Bower, Amy S.; Köhl, Armin; Gopalakrishnan, Ganesh; Rivas, David

    2014-01-01

    The shallow winter overturning circulation in the Red Sea is studied using a 50 year high-resolution MITgcm (MIT general circulation model) simulation with realistic atmospheric forcing. The overturning circulation for a typical year, represented

  18. EMMPRIN-Mediated Induction of Uterine and Vascular Matrix Metalloproteinases during Pregnancy and in Response to Estrogen and Progesterone

    OpenAIRE

    Dang, Yiping; Li, Wei; Tran, Victoria; Khalil, Raouf A.

    2013-01-01

    Pregnancy is associated with uteroplacental and vascular remodeling in order to adapt for the growing fetus and the hemodynamic changes in the maternal circulation. We have previously shown upregulation of uterine matrix metalloproteinases (MMPs) during pregnancy. Whether pregnancy-associated changes in MMPs are localized to the uterus or are generalized in feto-placental and maternal circulation is unclear. Also, the mechanisms causing the changes in uteroplacental and vascular MMPs during p...

  19. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...

  20. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. Utilizing optimized kernels whenever possible, the resulting performance of such a framework can be shown to be competitive with optimized Fortran programs.
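The format-independence idea in this record can be illustrated outside C++ as well. Below is a minimal Python sketch (not the library's actual interface) of an iterative solver, here conjugate gradients, written against a single abstract matrix-vector product, so the same algorithm runs unchanged on a dense row layout and a sparse dict-of-keys layout:

```python
def cg(matvec, b, tol=1e-12, max_iter=100):
    """Conjugate gradients for SPD systems; needs only matvec(x) -> list."""
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# two storage formats, one algorithm: dense rows vs. a sparse dict-of-keys
dense = [[4.0, 1.0], [1.0, 3.0]]
sparse = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}

def dense_mv(x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in dense]

def sparse_mv(x):
    y = [0.0] * 2
    for (i, j), a in sparse.items():
        y[i] += a * x[j]
    return y

b = [1.0, 2.0]
x1, x2 = cg(dense_mv, b), cg(sparse_mv, b)   # same solution from both formats
```

Only one version of the solver is maintained; each format supplies its own kernel, which is the design point the abstract makes.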

  1. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.

  2. Temperature control characteristics analysis of lead-cooled fast reactor with natural circulation

    International Nuclear Information System (INIS)

    Yang, Minghan; Song, Yong; Wang, Jianye; Xu, Peng; Zhang, Guangyu

    2016-01-01

Highlights: • The LFR temperature control system is analyzed with a frequency domain method. • The temperature control compensator is designed according to the frequency analysis. • Dynamic simulation is performed by SIMULINK and RELAP5-HD. - Abstract: The Lead-cooled Fast Reactor (LFR) with natural circulation in the primary system is a highlight of advanced nuclear reactor research, due to its superiority in reactor safety and reliability. In this work, a transfer function matrix describing the coolant temperature dynamic process, obtained by Laplace transform of the one-dimensional system dynamic model, is developed in order to investigate the temperature control characteristics of the LFR. Based on the transfer function matrix, a closed-loop coolant temperature control system without a compensator is built. The frequency domain analysis indicates that the stability and steady-state behaviour of the temperature control system need to be improved. Accordingly, a temperature compensator based on Proportion–Integration and feed-forward is designed. The dynamic simulation of the whole system with the temperature compensator for a core power step change is performed with SIMULINK and RELAP5-HD. The result shows that the temperature compensator can provide superior coolant temperature control capabilities in an LFR with natural circulation, owing to the efficiency of the frequency domain analysis method.
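To make the role of the Proportion–Integration (PI) compensator concrete, here is a toy Python sketch, not the paper's RELAP5-HD/SIMULINK model: a first-order lag stands in for the coolant temperature dynamics, and the simulation shows that proportional-only control leaves a steady-state offset while adding integral action removes it. All gains and time constants are invented illustration values.

```python
# assumed toy plant: first-order lag G(s) = K_plant / (tau*s + 1), Euler-discretized
dt, tau, K_plant = 0.01, 5.0, 2.0

def simulate(Kc, Ki, t_end=60.0, setpoint=1.0):
    """Closed loop with PI compensator u = Kc*e + Ki*integral(e); returns final output."""
    y, integ = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integ += e * dt
        u = Kc * e + Ki * integ
        y += dt * (K_plant * u - y) / tau
    return y

y_p = simulate(Kc=1.0, Ki=0.0)    # proportional only: settles at K*Kc/(1 + K*Kc) = 2/3
y_pi = simulate(Kc=1.0, Ki=0.2)   # integral action drives the steady-state error to zero
```

The integral term accumulates any remaining error until the plant output matches the setpoint, which is the steady-state improvement the abstract attributes to the PI compensator.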

  3. A Dual Active Bridge Converter with an Extended High-Efficiency Range by DC Blocking Capacitor Voltage Control

    DEFF Research Database (Denmark)

    Qin, Zian; Shen, Yanfeng; Loh, Poh Chiang

    2018-01-01

    of hard switching and high circulating power. Thus, a new modulation scheme has been proposed, whose main idea is to introduce a voltage offset across the dc blocking capacitor connected in series with the transformer. Operational principle of the proposed modulation has been introduced, before analyzing...

  4. The extracellular matrix - the under-recognized element in lung disease?

    NARCIS (Netherlands)

    Burgess, Janette K.; Mauad, Thais; Tjin, Gavin; Karlsson, Jenny C.; Westergren-Thorsson, Gunilla

    2016-01-01

    The lung is composed of airways and lung parenchyma, and the extracellular matrix (ECM) contains the main building blocks of both components. The ECM provides physical support and stability to the lung, and as such it has in the past been regarded as an inert structure. More recent research has

  5. Implementation of IMDCT Block of an MP3 Decoder through Optimization on the DCT Matrix

    Directory of Open Access Journals (Sweden)

    M. Galabov

    2004-12-01

The paper describes an attempt to create an efficient dedicated MP3 decoder, according to the MPEG-1 Layer III standard. A new method of Inverse Modified Discrete Cosine Transform by optimization on the Discrete Cosine Transform (DCT) matrix is proposed and an assembler program for a Digital Signal Processor is developed. In addition, a program to calculate the DCT using Lee's algorithm for any matrix of size 2M is created. The experimental results have proven that the decoder is able to stream and decode MP3 in real time.
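For orientation, the transform pair that such optimizations start from can be written down directly. The sketch below is the naive O(N²) DCT-II and its inverse in Python; Lee's fast factorization and the IMDCT-to-DCT mapping used in the paper are beyond this illustration:

```python
import math

def dct_ii(x):
    """Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct_ii(X):
    """Inverse (a scaled DCT-III): recovers x from dct_ii(x)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
y = idct_ii(dct_ii(x))   # round-trips back to x up to floating-point error
```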

  6. Ionic Liquids As Self-Assembly Guide for the Formation of Nanostructured Block Copolymer Membranes

    KAUST Repository

    Madhavan, Poornima

    2015-04-30

Nanostructured block copolymer membranes were manufactured by water induced phase inversion, using ionic liquids (ILs) as cosolvents. The effect of ionic liquids on the morphology was investigated, using polystyrene-b-poly(4-vinyl pyridine) (PS-b-P4VP) diblock copolymer as the membrane matrix and imidazolium- and pyridinium-based ILs. The effect of IL concentration and chemical composition was evident, with particular interaction with the P4VP blocks. The order of the block copolymer/IL solutions prior to membrane casting was confirmed by cryo scanning electron microscopy, and the morphologies of the manufactured nanostructured membranes were characterized by transmission and scanning electron microscopy. Non-protic ionic liquids facilitated the formation of a hexagonal nanoporous block copolymer structure, while protic ILs led to a lamella-structured membrane. The rheology of the IL/block copolymer solutions was investigated, evaluating the storage and loss moduli. Most membranes prepared with ionic liquids had higher water flux than pure block copolymer membranes without additives.

  7. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  8. Blocking temperature distribution in implanted Co-Ni nanoparticles obtained by magneto-optical measurements

    Energy Technology Data Exchange (ETDEWEB)

D'Orazio, F.; Lucari, F. E-mail: franco.lucari@aquila.infn.it; Melchiorri, M.; Julian Fernandez, C. de; Mattei, G.; Mazzoldi, P.; Sangregorio, C.; Gatteschi, D.; Fiorani, D

    2003-05-01

    Three samples of Co-Ni alloy nanoparticles with different compositions were prepared by sequential ion implantation in silica slides. Transmission electron microscopy (TEM) showed the presence of spherical nanoparticles dispersed in the matrix. Magneto-optical Kerr effect analysis identified two magnetic components attributed to superparamagnetic particles in unblocked and blocked states, respectively. Magnetic field loops were measured as a function of temperature. Blocking temperature distributions were obtained; and their comparison with the size distributions derived from TEM provided the average magnetic anisotropy of the particles.

  9. Blocking temperature distribution in implanted Co-Ni nanoparticles obtained by magneto-optical measurements

    International Nuclear Information System (INIS)

    D'Orazio, F.; Lucari, F.; Melchiorri, M.; Julian Fernandez, C. de; Mattei, G.; Mazzoldi, P.; Sangregorio, C.; Gatteschi, D.; Fiorani, D.

    2003-01-01

    Three samples of Co-Ni alloy nanoparticles with different compositions were prepared by sequential ion implantation in silica slides. Transmission electron microscopy (TEM) showed the presence of spherical nanoparticles dispersed in the matrix. Magneto-optical Kerr effect analysis identified two magnetic components attributed to superparamagnetic particles in unblocked and blocked states, respectively. Magnetic field loops were measured as a function of temperature. Blocking temperature distributions were obtained; and their comparison with the size distributions derived from TEM provided the average magnetic anisotropy of the particles

  10. Physical-morphological and chemical changes leading to an increase in adhesion between plasma treated polyester fibres and a rubber matrix

    International Nuclear Information System (INIS)

    Krump, H.; Hudec, I.; Jasso, M.; Dayss, E.; Luyt, A.S.

    2006-01-01

The effects of plasma treatment, used to increase adhesion strength between poly(ethylene terephthalate) (PET) fibres and a rubber matrix, were investigated and compared. Morphological changes as a result of atmospheric plasma treatment were observed using scanning electron microscopy (SEM) and atomic force microscopy (AFM). Wettability analysis using a surface energy evaluation system (SEE system) suggested that the plasma treated fibre was better wetted by a polar liquid. When treated, these fibres showed a new lamellar crystallization, as shown by a new melting peak using differential scanning calorimetry (DSC). X-ray photoelectron spectroscopy (XPS) has been used to study the chemical effect of inert (argon), active and reactive (nitrogen and oxygen) microwave-plasma treatments of a PET surface. Deconvolution of the spectra after reactive oxygen plasma treatment shows new chemical species that drastically alter the chemical reactivity of the PET surface. These studies have also shown that the surface population of chemical species formed after microwave-plasma treatment is dependent on the plasma gas. All these changes result in better adhesion strength of the PET fibres to the rubber matrix

  11. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

Mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach might underestimate the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves obtained from patients with different renal clearance levels with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the true MTT when the MTT was higher than 6 min. The longer the MTT, the greater the underestimation. Up to a MTT value of 6 min, the error in the MTT estimation is negligible. As normal cortical transit is less than 2 min, this approach is used in patients to calculate the pixel-by-pixel cortical mean transit time and to create a MTT parametric image without deconvolution.
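The integral-ratio estimate and its finite-acquisition bias described above can be reproduced numerically. The Python sketch below (illustrative curves, not patient data) builds a renogram by convolving a toy plasma curve with an exponential retention function of known MTT, then shows that the ratio of integrals recovers the MTT for a long acquisition but underestimates it over a 20 min window:

```python
import math

dt = 0.1                      # time step (min)
T_true = 4.0                  # true mean transit time (min), assumed for the demo
t = [i * dt for i in range(int(60 / dt))]          # 60 min "near-infinite" acquisition
plasma = [math.exp(-ti / 10.0) for ti in t]        # toy plasma clearance curve
retention = [math.exp(-ti / T_true) for ti in t]   # retention function, integral = MTT

def trapz(y, dt):
    """Trapezoidal integral of uniformly sampled y."""
    return dt * (sum(y) - 0.5 * (y[0] + y[-1]))

def convolve(p, h, dt):
    """Discrete convolution with trapezoidal end-point correction."""
    out = []
    for i in range(len(p)):
        s = sum(p[j] * h[i - j] for j in range(i + 1))
        s -= 0.5 * (p[0] * h[i] + p[i] * h[0])
        out.append(dt * s)
    return out

renogram = convolve(plasma, retention, dt)

def mtt_estimate(t_end):
    """MTT estimate = integral of renogram / integral of plasma over [0, t_end]."""
    n = int(t_end / dt)
    return trapz(renogram[:n], dt) / trapz(plasma[:n], dt)

m_long = mtt_estimate(60.0)   # close to the true 4.0 min
m_20 = mtt_estimate(20.0)     # noticeably below 4.0 min: the finite-window bias
```

With a 4 min MTT the 20 min window already shows a visible underestimate; the record's threshold of roughly 6 min marks where this bias stops being negligible.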

  12. Matrix metalloproteinase 2 and membrane type 1 matrix metalloproteinase co-regulate axonal outgrowth of mouse retinal ganglion cells

    DEFF Research Database (Denmark)

    Gaublomme, Djoere; Buyens, Tom; De Groef, Lies

    2014-01-01

    regenerative therapies, an improved understanding of axonal outgrowth and the various molecules influencing it, is highly needed. Matrix metalloproteinases (MMPs) constitute a family of zinc-dependent proteases that were sporadically reported to influence axon outgrowth. Using an ex vivo retinal explant model......, but not MMP-9, are involved in this process. Furthermore, administration of a novel antibody to MT1-MMP that selectively blocks pro-MMP-2 activation revealed a functional co-involvement of these proteinases in determining RGC axon outgrowth. Subsequent immunostainings showed expression of both MMP-2 and MT1...... nervous system is lacking in adult mammals, thereby impeding recovery from injury to the nervous system. Matrix metalloproteinases (MMPs) constitute a family of zinc-dependent proteases that were sporadically reported to influence axon outgrowth. Inhibition of specific MMPs reduced neurite outgrowth from...

  13. The deconvolution of Doppler-broadened positron annihilation measurements using fast Fourier transforms and power spectral analysis

    International Nuclear Information System (INIS)

    Schaffer, J.P.; Shaughnessy, E.J.; Jones, P.L.

    1984-01-01

    A deconvolution procedure which corrects Doppler-broadened positron annihilation spectra for instrument resolution is described. The method employs fast Fourier transforms, is model independent, and does not require iteration. The mathematical difficulties associated with the incorrectly posed first order Fredholm integral equation are overcome by using power spectral analysis to select a limited number of low frequency Fourier coefficients. The FFT/power spectrum method is then demonstrated for an irradiated high purity single crystal sapphire sample. (orig.)
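The essentials of the procedure, deconvolving in the Fourier domain while keeping only a limited number of low-frequency coefficients, can be sketched in pure Python. The line shapes below are synthetic Gaussians and the cutoff is picked by hand; the actual annihilation spectra and the power-spectrum-based cutoff selection are specific to the paper:

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

N = 64
# "true" narrow annihilation line shape (Gaussian, sigma = 3 samples)
truth = [math.exp(-((n - N // 2) ** 2) / (2 * 3.0 ** 2)) for n in range(N)]
# instrument resolution function (broader Gaussian, sigma = 5), centred at index 0
resolution = [math.exp(-(min(n, N - n) ** 2) / (2 * 5.0 ** 2)) for n in range(N)]
# broadened measurement = circular convolution of truth with the resolution function
measured = [sum(truth[j] * resolution[(n - j) % N] for j in range(N)) for n in range(N)]

M, R = dft(measured), dft(resolution)
K = 10   # keep only low-frequency coefficients; with noisy data this is what
         # stabilizes the division by small high-frequency values of R
D = [M[k] / R[k] if min(k, N - k) <= K else 0j for k in range(N)]
recovered = idft(D)

def width(y):
    """Number of samples above half maximum (a crude FWHM)."""
    m = max(y)
    return sum(1 for v in y if v > m / 2)
```

The deconvolved peak comes back markedly narrower than the measured, instrument-broadened one, which is the resolution correction the abstract describes.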

  14. Matrix-comparative genomic hybridization from multicenter formalin-fixed paraffin-embedded colorectal cancer tissue blocks

    Directory of Open Access Journals (Sweden)

    Köhne Claus-Henning

    2007-04-01

Abstract Background The identification of genomic signatures of colorectal cancer for risk stratification requires the study of large series of cancer patients with an extensive clinical follow-up. Multicentric clinical studies represent an ideal source of well documented archived material for this type of analysis. Methods To verify if this material is technically suitable to perform matrix-CGH, we performed a pilot study using 29 macrodissected formalin-fixed, paraffin-embedded tissue samples collected within the framework of the EORTC-GI/PETACC-2 trial for colorectal cancer. The scientific aim was to identify prognostic genomic signatures differentiating locally restricted (UICC stages II-III) from systemically advanced (UICC stage IV) colorectal tumours. Results The majority of archived tissue samples collected in the different centers was suitable to perform matrix-CGH. 5/7 advanced tumours displayed 13q-gain and 18q-loss. In locally restricted tumours, only 6/12 tumours showed a gain on 13q and 7/12 tumours showed a loss on 18q. Interphase-FISH and high-resolution array-mapping of the gain on 13q confirmed the validity of the array-data and narrowed the chromosomal interval containing potential oncogenes. Conclusion Archival, paraffin-embedded tissue samples collected in multicentric clinical trials are suitable for matrix-CGH analyses and allow the identification of prognostic signatures and aberrations harbouring potential new oncogenes.

  15. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gauge a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impacts the accuracy of linear deconvolution retrieval of feldspar proportions (e.g. K-feldspar vs. plagioclase) especially, as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.
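Linear deconvolution of emissivity spectra rests on the assumption that a measured spectrum is a linear (areal) mixture of endmember spectra, so abundances follow from least squares. A minimal Python sketch with two made-up endmembers (the values below are illustrative, not the study's library spectra):

```python
# two hypothetical endmember emissivity spectra over five TIR bands (illustrative values)
quartz = [0.85, 0.70, 0.80, 0.95, 0.97]
feldspar = [0.95, 0.92, 0.75, 0.85, 0.96]

# synthetic "measured" dune spectrum: 60% quartz + 40% feldspar
mixed = [0.6 * q + 0.4 * f for q, f in zip(quartz, feldspar)]

def unmix2(y, e1, e2):
    """Least-squares abundances for two endmembers via the 2x2 normal equations."""
    a11 = sum(v * v for v in e1)
    a22 = sum(v * v for v in e2)
    a12 = sum(u * v for u, v in zip(e1, e2))
    b1 = sum(u * v for u, v in zip(e1, y))
    b2 = sum(u * v for u, v in zip(e2, y))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

f_quartz, f_feldspar = unmix2(mixed, quartz, feldspar)
# noise-free synthetic mixture, so the fractions come back as ~0.60 and ~0.40
```

Real retrievals add more endmembers, noise, and non-negativity/sum-to-one constraints, and, as the abstract notes, their accuracy degrades as the number and placement of spectral bands are coarsened.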

  16. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-02-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gauge a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impacts the accuracy of linear deconvolution retrieval of feldspar proportions (e.g. K-feldspar vs. plagioclase) especially, as well as the detection of certain mafic and carbonate minerals. In particular, ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.

  17. Influence of additional heat exchanger block on directional solidification system for growing multi-crystalline silicon ingot - A simulation investigation

    Science.gov (United States)

    Nagarajan, S. G.; Srinivasan, M.; Aravinth, K.; Ramasamy, P.

    2018-04-01

Transient simulations have been carried out to analyze the heat transfer properties of a Directional Solidification (DS) furnace. The simulation results revealed that an additional heat exchanger block under the bottom insulation of the DS furnace enhances control over the solidification of the silicon melt. A controlled heat extraction rate during solidification of the silicon melt is required for growing good quality ingots, and this has been achieved by the additional heat exchanger block. As the additional heat exchanger block, a water circulating plate has been placed under the bottom insulation. The heat flux analysis of the DS system and the temperature distribution studies of the grown ingot confirm that the additional heat exchanger block gives an additional benefit to the mc-Si ingot.

  18. Multi-kernel deconvolution for contrast improvement in a full field imaging system with engineered PSFs using conical diffraction

    Science.gov (United States)

    Enguita, Jose M.; Álvarez, Ignacio; González, Rafael C.; Cancelas, Jose A.

    2018-01-01

    The problem of restoration of a high-resolution image from several degraded versions of the same scene (deconvolution) has been receiving attention in the last years in fields such as optics and computer vision. Deconvolution methods are usually based on sets of images taken with small (sub-pixel) displacements or slightly different focus. Techniques based on sets of images obtained with different point-spread-functions (PSFs) engineered by an optical system are less popular and mostly restricted to microscopic systems, where a spot of light is projected onto the sample under investigation, which is then scanned point-by-point. In this paper, we use the effect of conical diffraction to shape the PSFs in a full-field macroscopic imaging system. We describe a series of simulations and real experiments that help to evaluate the possibilities of the system, showing the enhancement in image contrast even at frequencies that are strongly filtered by the lens transfer function or when sampling near the Nyquist frequency. Although results are preliminary and there is room to optimize the prototype, the idea shows promise to overcome the limitations of the image sensor technology in many fields, such as forensics, medical, satellite, or scientific imaging.

  19. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

Thermogravimetric kinetics of bamboo waste (BW) pyrolysis have been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, BW pyrolytic profiles could be well separated into three reactions, each of which corresponded to pseudo hemicelluloses (P-HC), pseudo cellulose (P-CL), and pseudo lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energy of P-HC, P-CL, and P-LG was found to be 175.6 kJ/mol, 199.7 kJ/mol, and 158.4 kJ/mol, respectively. Energy compensation effects (ln k0,z vs. Ez) of the pseudo components were in good linearity, from which pre-exponential factors (k0) were determined as 6.22E+11 s^-1 (P-HC), 4.50E+14 s^-1 (P-CL) and 1.3E+10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanism of P-HC, P-CL, and P-LG was reaction order of f(α)=(1-α)^2, f(α)=1-α and f(α)=(1-α)^n (n=6-8), respectively. The mechanism of P-HC and P-CL could be further reconstructed to the n-th order Avrami-Erofeyev model of f(α)=0.62(1-α)[-ln(1-α)]^-0.61 (n=0.62) and f(α)=1.08(1-α)[-ln(1-α)]^0.074 (n=1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
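As an illustration of the Friedman (isoconversional) step of such an analysis, the Python sketch below generates synthetic single-step, first-order TGA data at two heating rates with an assumed activation energy, then recovers that energy from the slope of ln(dα/dt) versus 1/T at fixed conversion. The kinetic parameters are invented for the demo, not the bamboo-waste values above:

```python
import math

R = 8.314        # gas constant, J/(mol*K)
E_true = 180e3   # assumed activation energy for the synthetic data, J/mol
k0 = 1e13        # assumed pre-exponential factor, 1/s

def rate_and_T_at_half(beta, dt=0.02):
    """Euler-integrate da/dt = k0*exp(-E/(R*T))*(1-a) under linear heating
    T = 500 K + beta*t, and return (da/dt, T) at fixed conversion a = 0.5."""
    T, a = 500.0, 0.0
    while a < 0.5:
        a_prev, T_prev = a, T
        a += dt * k0 * math.exp(-E_true / (R * T)) * (1.0 - a)
        T += beta * dt
    frac = (0.5 - a_prev) / (a - a_prev)        # interpolate the crossing
    T_half = T_prev + frac * beta * dt
    rate_half = k0 * math.exp(-E_true / (R * T_half)) * 0.5
    return rate_half, T_half

r1, T1 = rate_and_T_at_half(5.0 / 60.0)    # 5 K/min heating rate
r2, T2 = rate_and_T_at_half(20.0 / 60.0)   # 20 K/min heating rate

# Friedman: ln(da/dt) = ln[k0*f(a)] - E/(R*T) at fixed a, so the slope in 1/T gives -E/R
E_est = -R * (math.log(r1) - math.log(r2)) / (1.0 / T1 - 1.0 / T2)
```

Because the synthetic data obey a single Arrhenius step exactly, the recovered E matches the assumed value essentially exactly; on real multi-component data like BW this is why the deconvolution into pseudo components is done first.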

  20. Theoretical research for natural circulation operational characteristic of ship nuclear machinery under ocean conditions

    Energy Technology Data Exchange (ETDEWEB)

    Yan Binghuo [Department of Nuclear Science and Engineering, Naval University of Engineering, Wuhan 430033 (China)], E-mail: yanbh1986@163.com; Yu Lei [Department of Nuclear Science and Engineering, Naval University of Engineering, Wuhan 430033 (China)], E-mail: yulei301@163.com

    2009-06-15

    Based on the two-phase drift flux model and the multi-pressure nodes matrix solving method, natural circulation thermal hydraulic analysis models for the Nuclear Machinery (NM) under ocean conditions are developed. The neutron physical activities and the responses of the reactivity control systems are described by the two-group, 3-dimensional space and time dependent neutron kinetics model. Reactivity feedback is calculated by coupling the neutron physics and thermal hydraulic codes, and is tested by comparison with experiments. Using the models developed, the natural circulation operating characteristics of NM in rolling and pitching motions and the transitions between forced circulation (FC) to natural circulation (NC) are analyzed. The results show that the influence of the rolling motion increases as the rolling amplitude is increased, and as the rolling period becomes shorter. The results also show that for this NM, with the same rolling period and rolling angle, the influence of pitching motion on natural circulation is greater than that of rolling motion. Furthermore, the oscillation period for pitching motion is the same as the pitching period, while the oscillation period for rolling is one half of the rolling period. In the ocean environment, excessive flow oscillation of the natural circulation may cause the control rods to respond so frequently that the NM would not be able to realize the transition from the FC to NC steadily. However, the influence of ocean environment on the transition from NC to FC is limited.

  1. Controlled specific placement of nanoparticles into microdomains of block copolymer thin films

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Joonwon, E-mail: joonwonbae@gmail.com [Department of Applied Chemistry, Dongduk Women's University, Seoul 136-714 (Korea, Republic of); Kim, Jungwook [Department of Chemical and Biomolecular Engineering, Sogang University, Seoul 121-742 (Korea, Republic of); Park, Jongnam, E-mail: jnpark@unist.ac.kr [Interdisciplinary School of Green Energy, Ulsan National Institute of Science and Technology (UNIST), Ulsan 689-798 (Korea, Republic of)

    2014-07-01

    Conceptually attractive hybrid materials composed of nanoparticles and elegant block copolymers have become important for diverse applications. In this work, controlled specific placement of nanoparticles such as gold (Au) and titania (TiO{sub 2}) into microphase separated domains in poly(styrene)-b-poly(2-vinylpyridine) (PS-b-P2VP) block copolymer thin films was demonstrated. The effect of nanoparticle surface functionality on the spatial location of particles inside polymer film was observed by transmission electron microscopy. It was revealed that the location of nanoparticles was highly dependent on the surface ligand property of nanoparticle. In addition, the microphase separation behavior of thin block copolymer film was also affected by the nanoparticle surface functional groups. This study might provide a way to understand the properties and behaviors of numerous block copolymer/nanoparticle hybrid systems. - Highlights: • Controlled location of nanoparticles in the block copolymer matrix • Tailoring surface functionality of metal nanocrystals • Fabrication of homogeneous nanocomposites using organic inorganic components • Possibility for the preparation of nanohybrids.

  2. Controlled specific placement of nanoparticles into microdomains of block copolymer thin films

    International Nuclear Information System (INIS)

    Bae, Joonwon; Kim, Jungwook; Park, Jongnam

    2014-01-01

    Conceptually attractive hybrid materials composed of nanoparticles and elegant block copolymers have become important for diverse applications. In this work, controlled specific placement of nanoparticles such as gold (Au) and titania (TiO2) into microphase separated domains in poly(styrene)-b-poly(2-vinylpyridine) (PS-b-P2VP) block copolymer thin films was demonstrated. The effect of nanoparticle surface functionality on the spatial location of particles inside polymer film was observed by transmission electron microscopy. It was revealed that the location of nanoparticles was highly dependent on the surface ligand property of nanoparticle. In addition, the microphase separation behavior of thin block copolymer film was also affected by the nanoparticle surface functional groups. This study might provide a way to understand the properties and behaviors of numerous block copolymer/nanoparticle hybrid systems. - Highlights: • Controlled location of nanoparticles in the block copolymer matrix • Tailoring surface functionality of metal nanocrystals • Fabrication of homogeneous nanocomposites using organic inorganic components • Possibility for the preparation of nanohybrids

  3. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author)

  4. Morphological studies on block copolymer modified PA 6 blends

    Energy Technology Data Exchange (ETDEWEB)

    Poindl, M., E-mail: marcus.poindl@ikt.uni-stuttgart.de, E-mail: christian.bonten@ikt.uni-stuttgart.de; Bonten, C., E-mail: marcus.poindl@ikt.uni-stuttgart.de, E-mail: christian.bonten@ikt.uni-stuttgart.de [Institut für Kunststofftechnik, University of Stuttgart (Germany)

    2014-05-15

    Recent studies show that compounding polyamide 6 (PA 6) with PA 6 polyether block copolymers made by reaction injection molding (RIM) or by continuous anionic polymerization in a reactive extrusion process (REX) results in blends with high impact strength and high stiffness compared to conventional rubber blends. In this paper, different high-impact PA 6 blends were prepared using a twin screw extruder. The impact modifiers were an ethylene propylene copolymer, a PA 6 polyether block copolymer made by reaction injection molding, and one made by reactive extrusion. To ensure good particle-matrix bonding, the ethylene propylene copolymer was grafted with maleic anhydride (EPR-g-MA). Due to the molecular structure of the two block copolymers, a coupling agent was not necessary. The block copolymers are semi-crystalline and partially cross-linked, in contrast to commonly used amorphous rubbers, which are usually uncured. The combination of different analysis methods such as atomic force microscopy (AFM), transmission electron microscopy (TEM) and scanning electron microscopy (SEM) gave a detailed view into the structure of the blends. Due to the partial cross-linking, the particles of the block copolymers in the blends are not spherical like those of the ethylene propylene copolymer. The differences in molecular structure, miscibility and grafting of the impact modifiers result in different mechanical properties and different blend morphologies.

  5. [Nuclear matrix organization of the chromocenters in cultured murine fibroblasts].

    Science.gov (United States)

    Sheval', E V; Poliakov, V Iu

    2010-01-01

    In the current work, the structural organization of the nuclear matrix of pericentromeric heterochromatin blocks (chromocenters) in cultured murine fibroblasts was investigated. After 2 M NaCl extraction without DNase I treatment, chromocenters were extremely swollen, and it was impossible to detect them using conventional electron microscopy. Using immunogold labeling with an anti-topoisomerase IIalpha antibody, we demonstrated that residual chromocenters were subdivided into numerous discrete aggregates. After 2 M NaCl extraction with DNase I treatment, the residual chromocenters appeared as a dense meshwork of thin fibers, and by this feature the residual chromocenters were easily distinguished from the rest of the nuclear matrix. After extraction with dextran sulfate and heparin, the chromocenters were decondensed, and chromatin complexes with a rosette organization (a central core from which numerous DNA fibers radiate) were seen. Probably, the appearance of these rosettes was a consequence of incomplete chromatin extraction. Thus, the nuclear matrix of pericentromeric chromosome regions in cultured murine fibroblasts differs morphologically from the rest of the nuclear matrix.

  6. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses how optically fast multichannel Thomson scattering optics can be used for H-alpha emission profile measurements. A technique based on the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that for this case, about 28 Fourier modes are optimal to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, the assumed up-down symmetry and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges

  7. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    Science.gov (United States)

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images taken not close to the Scherzer focus condition and not representing the projected structures intuitively were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atomic sort recognition are summarized. The defect structure restoration together with the recognition of Si and C atoms from the experimental images has been illustrated. The structure maps of an intrinsic stacking fault in the area of SiC, and of Lomer and 60° shuffle dislocations at the interface have been obtained at atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    Science.gov (United States)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For predicting higher TM-orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of

  9. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed a variation of no better than 10%-15% among hydrophones during HITU pressure characterization. Published by Elsevier Inc.
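
    The complex-deconvolution step mentioned above amounts to a regularized division in the frequency domain. The sketch below is our own minimal illustration, not the authors' code: it assumes the complex sensitivity is sampled on the same FFT grid as the voltage trace, and the synthetic curve in the usage example is a stand-in for a real calibration.

```python
import numpy as np

def deconvolve_pressure(voltage, sensitivity, eps=1e-6):
    """Recover a pressure waveform from a hydrophone voltage trace,
    given the frequency-dependent complex sensitivity M(f) in V/Pa
    sampled on the same FFT grid as the voltage signal. The division
    is regularized (Wiener-style) so bins where |M(f)| is small do
    not blow up."""
    V = np.fft.fft(voltage)
    M = np.asarray(sensitivity, dtype=complex)
    P = V * np.conj(M) / (np.abs(M) ** 2 + eps)
    return np.real(np.fft.ifft(P))
```

    Round-tripping a known waveform through a Hermitian-symmetric synthetic sensitivity recovers it to within the regularization error.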

  10. Self-organization processes in polysiloxane block copolymers, initiated by modifying fullerene additives

    Science.gov (United States)

    Voznyakovskii, A. P.; Kudoyarova, V. Kh.; Kudoyarov, M. F.; Patrova, M. Ya.

    2017-08-01

    Thin films of a polyblock polysiloxane copolymer and their composites with a modifying fullerene C60 additive are studied by atomic force microscopy, Rutherford backscattering, and neutron scattering. The atomic force microscopy data show that when fullerene is added to the bulk of the polymer matrix, the initial relief of the film surface becomes smoother as the amount of additive increases. This trend is associated with the processes of self-organization of rigid block sequences, which are initiated by the field effect of the surface of fullerene aggregates and lead to an increase in the number of their domains in the bulk of the polymer matrix. The Rutherford backscattering and neutron scattering data indicate the formation of additional structures with a radius of 60 nm only in films containing fullerene, and their fraction increases with increasing fullerene concentration. A comparative analysis of the data of these methods has shown that such structures are precisely the domains of a rigid block and are not formed by individual fullerene aggregates. The interrelation of the structure and mechanical properties of the polymer films is considered.

  11. Hybrid titanium dioxide/PS-b-PEO block copolymer nanocomposites based on sol-gel synthesis

    International Nuclear Information System (INIS)

    Gutierrez, J; Tercjak, A; Garcia, I; Peponi, L; Mondragon, I

    2008-01-01

    The poly(styrene)-b-poly(ethylene oxide) (SEO) amphiphilic block copolymer, with two different molecular weights, has been used as a structure directing agent for generating nanocomposites of TiO2/SEO via the sol-gel process. SEO amphiphilic block copolymers are designed with a hydrophilic PEO-block which can interact with inorganic molecules, as well as a hydrophobic PS-block which builds the matrix. The addition of different amounts of sol-gel provokes strong variations in the self-assembled morphology of TiO2/SEO nanocomposites with respect to the neat block copolymer. As confirmed by atomic force microscopy (AFM), TiO2/PEO-block micelles get closer, forming well-ordered spherical domains, in which TiO2 nanoparticles constitute the core surrounded by a corona of PEO-blocks. Moreover, for 20 vol% sol-gel the generated morphology changes to a hexagonally ordered structure for both block copolymers. The cylindrical structure of these nanocomposites has been confirmed by the two-dimensional Fourier transform power spectrum of the corresponding AFM height images. Affinity between the titanium dioxide precursor and the PEO-block of SEO allows us to generate hybrid inorganic/organic nanocomposites, which retain the optical properties of TiO2, as evaluated by UV-vis spectroscopy

  12. Chitosan microspheres with an extracellular matrix-mimicking nanofibrous structure as cell-carrier building blocks for bottom-up cartilage tissue engineering

    Science.gov (United States)

    Zhou, Yong; Gao, Huai-Ling; Shen, Li-Li; Pan, Zhao; Mao, Li-Bo; Wu, Tao; He, Jia-Cai; Zou, Duo-Hong; Zhang, Zhi-Yuan; Yu, Shu-Hong

    2015-12-01

    Scaffolds for tissue engineering (TE) which closely mimic the physicochemical properties of the natural extracellular matrix (ECM) have been proven to advantageously favor cell attachment, proliferation, migration and new tissue formation. Recently, as a valuable alternative, a bottom-up TE approach utilizing cell-loaded micrometer-scale modular components as building blocks to reconstruct a new tissue in vitro or in vivo has been proved to demonstrate a number of desirable advantages compared with the traditional bulk scaffold based top-down TE approach. Nevertheless, micro-components with an ECM-mimicking nanofibrous structure are still very scarce and highly desirable. Chitosan (CS), an accessible natural polymer, has demonstrated appealing intrinsic properties and promising application potential for TE, especially the cartilage tissue regeneration. According to this background, we report here the fabrication of chitosan microspheres with an ECM-mimicking nanofibrous structure for the first time based on a physical gelation process. By combining this physical fabrication procedure with microfluidic technology, uniform CS microspheres (CMS) with controlled nanofibrous microstructure and tunable sizes can be facilely obtained. Especially, no potentially toxic or denaturizing chemical crosslinking agent was introduced into the products. Notably, in vitro chondrocyte culture tests revealed that enhanced cell attachment and proliferation were realized, and a macroscopic 3D geometrically shaped cartilage-like composite can be easily constructed with the nanofibrous CMS (NCMS) and chondrocytes, which demonstrate significant application potential of NCMS as the bottom-up cell-carrier components for cartilage tissue engineering.

  13. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
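
    To make the SpMM kernel concrete, here is a deliberately naive Python reference version over the common CSR format (the paper's optimized kernels use the CSB format and careful blocking; this sketch only shows the access pattern that SpMM amortizes, namely touching each stored nonzero once for all vectors):

```python
import numpy as np

def csr_spmm(indptr, indices, data, X):
    """Multiply a CSR sparse matrix by a tall-skinny dense block X.
    Each stored nonzero is read once and applied to a whole row of X,
    which is why SpMM has a bandwidth advantage over repeated SpMV."""
    n_rows = len(indptr) - 1
    Y = np.zeros((n_rows, X.shape[1]))
    for i in range(n_rows):
        for jj in range(indptr[i], indptr[i + 1]):
            Y[i] += data[jj] * X[indices[jj]]  # update all vectors at once
    return Y
```

    The result agrees with the dense product; a production kernel would block the loops for cache reuse instead of iterating row by row.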

  14. [Regional liver circulation and the scintigraphic representation of the portal circulation with 133Xe].

    Science.gov (United States)

    Kroiss, A

    1984-01-01

    Regional hepatic blood flow has been determined by 4 methods with the aid of the 133Xe washout technique: scintisplenoportography (direct application of 133Xe into the spleen by means of a thin needle); arterial method (133Xe is injected into the A. hepatica by means of a catheter); retrograde-venous method (133Xe administered by an occluding hepatic vein catheter); percutaneous intrahepatic method (133Xe administered directly into the parenchyma by means of a Chiba needle). Ad 1.: Scintisplenoportography (SSP) was executed with 97 patients: 8 patients with a healthy liver presented a hepatic blood flow of 103.37 +/- 11.5 ml/100 g/min. 4 patients with a chronic hepatitis showed a hepatic blood flow of 105.67 +/- 10.2 ml/100 g/min. In 38 patients with compensated cirrhosis, hepatic blood flow was determined with 58.15 +/- 11.5 ml/100 g/min and 19 patients with decompensated cirrhosis showed a blood flow of 34.54 +/- 7.2 ml/100 g/min. Of the 19 patients, who did not present any liver image, 2 patients suffered from a prehepatic block, 1 patient (female) from a posthepatic block, the rest were decompensated cirrhoses. In 5 patients suffering from steatosis only collateral circulation was determined and in 4 patients the spleen could not be punctured. In the patients with compensated and decompensated cirrhosis of the liver, hepatic blood flow differed significantly (p < 0.001) from patients with healthy livers and chronic hepatitis. In the patients with bioptically assured steatosis only the washout constant was determined. Reproducibility of this method was tested in 4 patients and no statistical difference of hepatic blood flow values could be found and the correlation coefficient amounted to 0.9856. The advantage of SSP lies in the possibility of recording the portal vein circulation: cranial collaterals were found in 33 patients, 2 patients had caudal collaterals exclusively and 29 patients cranial and caudal collaterals. 33 cirrhosis patients

  15. Management of investment-construction projects based on the matrix of key events

    Directory of Open Access Journals (Sweden)

    Morozenko Andrey Aleksandrovich

    2016-11-01

    The article considers current problematic issues in the management of investment-construction projects and examines ways to increase the efficiency of construction operations on the basis of the formation of a reflex-adaptive organizational structure. The authors analyzed the necessity of forming a matrix of key events in the investment-construction project (ICP), which will create the optimal structure of the project, based on the work program for its implementation. For convenience of representing the programs of project implementation in time, the authors recommend consolidating the works into separate, economically independent functional blocks. It is proposed to use an algorithm for forming the matrix of an investment-construction project that considers the economic independence of the functional blocks and the stages of the ICP implementation. The use of an extended network model is justified, which is supplemented by organizational and structural constraints at different stages of the project, highlighting key events fundamentally influencing the further course of the ICP implementation.

  16. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications as for in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full energy peaks are folded together with high ratio between their amplitudes, and when the Compton background is much larger compared to the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on “iterative peak fitting deconvolution” method and a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented into the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method largely validated by industrial standards to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert parameter fine tuning.

  17. Analysis of Block OMP using Block RIP

    OpenAIRE

    Wang, Jun; Li, Gang; Zhang, Hao; Wang, Xiqin

    2011-01-01

    Orthogonal matching pursuit (OMP) is a canonical greedy algorithm for sparse signal reconstruction. When the signal of interest is block sparse, i.e., it has nonzero coefficients occurring in clusters, the block version of OMP algorithm (i.e., Block OMP) outperforms the conventional OMP. In this paper, we demonstrate that a new notion of block restricted isometry property (Block RIP), which is less stringent than standard restricted isometry property (RIP), can be used for a very straightforw...
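
    The block-greedy selection at the core of Block OMP is simple to sketch. The following is a generic, illustrative NumPy version (the function name and the fixed-size block layout are our own assumptions, not the authors' code): each iteration picks the block of dictionary columns most correlated with the residual, then re-fits all selected blocks jointly by least squares.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks_select):
    """Greedy Block OMP: select whole blocks of columns of A that best
    correlate with the residual, then re-fit by least squares."""
    n = A.shape[1]
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    residual = y.copy()
    selected = []
    for _ in range(n_blocks_select):
        # score each block by the norm of its correlation with the residual
        scores = [np.linalg.norm(A[:, b].T @ residual) for b in blocks]
        best = int(np.argmax(scores))
        if best not in selected:
            selected.append(best)
        support = np.concatenate([blocks[b] for b in selected])
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

    With a dictionary of orthonormal columns, the block correlations equal the true block coefficients, so a noiselessly observed block-sparse signal is recovered exactly.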

  18. A new coal-permeability model: Internal swelling stress and fracture-matrix interaction

    Energy Technology Data Exchange (ETDEWEB)

    Liu, H.H.; Rutqvist, J.

    2009-10-01

    We have developed a new coal-permeability model for uniaxial strain and constant confining stress conditions. The model is unique in that it explicitly considers fracture-matrix interaction during coal deformation processes and is based on a newly proposed internal-swelling stress concept. This concept is used to account for the impact of matrix swelling (or shrinkage) on fracture-aperture changes resulting from partial separation of matrix blocks by fractures that do not completely cut through the whole matrix. The proposed permeability model is evaluated with data from three Valencia Canyon coalbed wells in the San Juan Basin, where increased permeability has been observed during CH{sub 4} gas production, as well as with published data from laboratory tests. Model results are generally in good agreement with observed permeability changes. The importance of fracture-matrix interaction in determining coal permeability, demonstrated in this work using relatively simple stress conditions, underscores the need for a dual-continuum (fracture and matrix) mechanical approach to rigorously capture coal-deformation processes under complex stress conditions, as well as the coupled flow and transport processes in coal seams.

  19. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals

    Directory of Open Access Journals (Sweden)

    Pablo Soto-Quiros

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to minimize the high execution time. Additionally, speedup increases when the number of logical processors and the length of the signal increase.
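
    The block-matrix formulation of the vector-valued DFT can be illustrated directly (a NumPy stand-in for the paper's MATLAB framework, with our own function names): the transform is the Kronecker product F_N ⊗ I_m applied to the stacked signal, which agrees with running an N-point FFT independently over each of the m components.

```python
import numpy as np

def vector_valued_dft(x):
    """DFT of a vector-valued signal x of shape (N, m): an N-point DFT
    applied independently to each of the m components."""
    return np.fft.fft(x, axis=0)

def vector_valued_dft_block(x):
    """Same transform written explicitly as a block (Kronecker) matrix
    (F_N kron I_m) acting on the row-major stacked signal."""
    N, m = x.shape
    F = np.fft.fft(np.eye(N), axis=0)   # N x N DFT matrix
    big = np.kron(F, np.eye(m))         # one m x m identity block per DFT entry
    return (big @ x.reshape(N * m)).reshape(N, m)
```

    The explicit block matrix is only for exposition; the per-component FFT form is what a parallel implementation would distribute across cores.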

  20. Quantum spin circulator in Y junctions of Heisenberg chains

    Science.gov (United States)

    Buccheri, Francesco; Egger, Reinhold; Pereira, Rodrigo G.; Ramos, Flávia B.

    2018-06-01

    We show that a quantum spin circulator, a nonreciprocal device that routes spin currents without any charge transport, can be achieved in Y junctions of identical spin-1/2 Heisenberg chains coupled by a chiral three-spin interaction. Using bosonization, boundary conformal field theory, and density matrix renormalization group simulations, we find that a chiral fixed point with maximally asymmetric spin conductance arises at a critical point separating a regime of disconnected chains from a spin-only version of the three-channel Kondo effect. We argue that networks of spin-chain Y junctions provide a controllable approach to construct long-sought chiral spin-liquid phases.

  1. Circulating levels of chromatin fragments are inversely correlated with anti-dsDNA antibody levels in human and murine systemic lupus erythematosus

    DEFF Research Database (Denmark)

    Jørgensen, Mariann H; Rekvig, Ole Petter; Jacobsen, Rasmus S

    2011-01-01

    Anti-dsDNA antibodies represent a central pathogenic factor in lupus nephritis. Together with nucleosomes they deposit as immune complexes in the mesangial matrix and along basement membranes within the glomeruli. The origin of the nucleosomes and when they appear, e.g. in the circulation, is not known. … An inverse correlation between anti-dsDNA antibodies and the DNA concentration in the circulation was found in both murine and human serum samples. High titers of anti-DNA antibodies in human sera correlated with reduced levels of circulating chromatin, and in lupus-prone mice with deposition within glomeruli. … The inverse correlation between DNA concentration and anti-dsDNA antibodies may reflect antibody-dependent deposition of immune complexes during the development of lupus nephritis in autoimmune lupus-prone mice. The measurement of circulating DNA in SLE sera by using qPCR may indicate and detect…

  2. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    International Nuclear Information System (INIS)

    Floberg, J M; Holden, J E

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)
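
    The smooth-then-restore idea behind STEM filtering can be sketched in one dimension: blur with a Gaussian kernel (a stand-in for the four-dimensional filter) and then run a few Richardson-Lucy iterations, the classic EM deconvolution for Poisson-like data. All names and parameters here are illustrative, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def em_deconvolve(blurred, kernel, n_iter=20):
    """Richardson-Lucy (EM) deconvolution: multiplicative updates that
    keep the estimate non-negative while restoring the frequencies the
    Gaussian filter suppressed."""
    estimate = np.full_like(blurred, blurred.mean())
    flipped = kernel[::-1]
    for _ in range(n_iter):
        conv = np.convolve(estimate, kernel, mode='same')
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, flipped, mode='same')
    return estimate
```

    On a noiseless blurred peak, a few dozen iterations bring the estimate substantially closer to the original than the blurred input, without ever going negative.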

  3. 31 CFR 595.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY TERRORISM SANCTIONS REGULATIONS General Definitions § 595.301 Blocked account; blocked property. The terms blocked account and blocked...

  4. A feasibility study for the application of seismic interferometry by multidimensional deconvolution for lithospheric-scale imaging

    Science.gov (United States)

    Ruigrok, Elmer; van der Neut, Joost; Djikpesse, Hugues; Chen, Chin-Wu; Wapenaar, Kees

    2010-05-01

    Active-source surveys are widely used for the delineation of hydrocarbon accumulations. Most source and receiver configurations are designed to illuminate the first 5 km of the earth. For a deep understanding of the evolution of the crust, much larger depths need to be illuminated. The use of large-scale active surveys is feasible, but rather costly. As an alternative, we use passive acquisition configurations, aiming at detecting responses from distant earthquakes, in combination with seismic interferometry (SI). SI refers to the principle of generating new seismic responses by combining seismic observations at different receiver locations. We apply SI to the earthquake responses to obtain responses as if there were a source at each receiver position in the receiver array. These responses are subsequently migrated to obtain an image of the lithosphere. Conventionally, SI is applied by a crosscorrelation of responses. Recently, an alternative implementation was proposed as SI by multidimensional deconvolution (MDD) (Wapenaar et al. 2008). SI by MDD compensates for both the source-sampling and the source-wavelet irregularities. Another advantage is that the MDD relation also holds for media with severe anelastic losses. A severe restriction for the implementation of MDD, though, was the need to estimate responses without free-surface interaction from the earthquake responses. To mitigate this restriction, Groenestijn and Verschuur (2009) proposed to introduce the incident wavefield as an additional unknown in the inversion process. As an alternative solution, van der Neut et al. (2010) showed that the required wavefield separation may be implemented after a crosscorrelation step. These last two approaches facilitate the application of MDD for lithospheric-scale imaging. In this work, we study the feasibility of implementing MDD for teleseismic wavefields. We address specific problems for teleseismic wavefields, such as long and complicated source

  5. Microwave circulator design

    CERN Document Server

    Linkhart, Douglas K

    2014-01-01

    Circulator design has advanced significantly since the first edition of this book was published 25 years ago. The objective of this second edition is to present theory, information, and design procedures that will enable microwave engineers and technicians to design and build circulators successfully. This resource contains a discussion of the various units used in the circulator design computations, and covers the theory of operation. This book presents numerous applications, giving microwave engineers new ideas about how to solve problems using circulators. Design examples are provided, which demonstrate how to apply the information to real-world design tasks.

  6. Matrix pentagons

    Science.gov (United States)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  7. Matrix pentagons

    Directory of Open Access Journals (Sweden)

    A.V. Belitsky

    2017-10-01

    Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  8. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, noise makes the restored image differ substantially from the true image, yielding an ill-posed problem. To address it, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm and uses it as the penalty term in the high-frequency domain of the image. The function is then updated iteratively, and an iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because gradient-domain information is better suited to blur-kernel estimation, the blur kernel is estimated in the gradient domain; this step can be implemented quickly in the frequency domain via the fast Fourier transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring the accuracy of the results.
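
The L1/L2 ratio prior at the heart of the method can be stated compactly. The sketch below (illustrative names, not code from the paper) evaluates the measure on image gradients; it decreases as the gradient field becomes sparser, which sharp images favour over blurred ones.

```python
import numpy as np

def l1_over_l2_gradient_prior(img, eps=1e-12):
    """Evaluate the L1/L2 sparsity measure on the image gradient.
    Smaller values indicate a sparser gradient field."""
    gx = np.diff(img, axis=1)   # horizontal forward differences
    gy = np.diff(img, axis=0)   # vertical forward differences
    g = np.concatenate([gx.ravel(), gy.ravel()])
    return np.abs(g).sum() / (np.linalg.norm(g) + eps)
```

For a sharp step edge the gradient energy is concentrated in few pixels, so its L1/L2 value is lower than for a blurred version of the same edge; that gap is what the penalty term exploits.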

  9. Linear analysis of rotationally invariant, radially variant tomographic imaging systems

    International Nuclear Information System (INIS)

    Huesmann, R.H.

    1990-01-01

    This paper describes a method to analyze the linear imaging characteristics of rotationally invariant, radially variant tomographic imaging systems using singular value decomposition (SVD). When the projection measurements from such a system are assumed to be samples from independent and identically distributed multi-normal random variables, the best estimate of the emission intensity is given by the unweighted least squares estimator. The noise amplification of this estimator is inversely proportional to the singular values of the normal matrix used to model projection and backprojection. After choosing an acceptable noise amplification, the new method can determine the number of parameters and hence the number of pixels that should be estimated from data acquired from an existing system with a fixed number of angles and projection bins. Conversely, for the design of a new system, the number of angles and projection bins necessary for a given number of pixels and noise amplification can be determined. In general, computing the SVD of the projection normal matrix has cubic computational complexity. However, the projection normal matrix for this class of rotationally invariant, radially variant systems has a block circulant form. A fast parallel algorithm to compute the SVD of this block circulant matrix makes the singular value analysis practical by asymptotically reducing the computational complexity of the method by a multiplicative factor equal to the square of the number of angles.
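
The block circulant structure mentioned above is what makes a fast SVD possible: the DFT across the block index block-diagonalizes the matrix, so its singular values are the union of the singular values of p small blocks. A minimal sketch with illustrative function names:

```python
import numpy as np

def block_circulant(blocks):
    """Assemble a block circulant matrix whose block row i holds
    blocks[(j - i) % p] in block column j."""
    p = len(blocks)
    return np.block([[blocks[(j - i) % p] for j in range(p)]
                     for i in range(p)])

def block_circulant_singular_values(blocks):
    """Singular values of the block circulant matrix via the DFT
    across the block index, which block-diagonalizes the matrix:
    only p small SVDs are needed instead of one large one."""
    hat = np.fft.fft(np.stack(blocks, axis=0), axis=0)  # shape (p, m, n)
    svals = np.concatenate([np.linalg.svd(hat[k], compute_uv=False)
                            for k in range(len(blocks))])
    return np.sort(svals)[::-1]
```

For p angle blocks of size m-by-n this replaces one SVD of a (pm)-by-(pn) matrix with p SVDs of m-by-n matrices, which is the kind of roughly quadratic-in-angles saving the abstract refers to.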

  10. Evaluation of obstructive uropathy by deconvolution analysis of 99mTc-mercaptoacetyltriglycine (99mTc-MAG3) renal scintigraphic data. A comparison with diuresis renography

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Yoshiyuki [Mie Univ., Tsu (Japan). School of Medicine

    1997-06-01

    Clinical significance of ERPF (effective renal plasma flow) and MTT (mean transit time) calculated by deconvolution analysis was studied in patients with obstructive uropathy. Subjects were 84 kidneys of 38 patients and 4 people without renal abnormality (22 males and 20 females), whose mean age was 53.8 y. Scintigraphy was done with a Toshiba γ-camera GCA-7200A equipped with a low energy-high resolution collimator with an energy window of 149 keV±20%, at 20 min after loading of 500 ml of water and immediately after intravenous administration of 99mTc-MAG3 (200 MBq). At 5 min later, blood was collected, and at 10 min, furosemide was intravenously given. Plasma radioactivity was measured in a well-type scintillation counter and was used for correction of the blood concentration-time curve obtained from heart area data. Split MTT, regional MTT and ERPF were calculated by deconvolution analysis. Impaired transit was judged from the renogram after furosemide loading and was classified into 6 types. ERPF was found to be lowered in cases of obstruction and in low renal function; regional MTT was prolonged only in the former cases. The examination with deconvolution analysis was concluded to merit wide use, since it gave information useful for treatment. (K.H.)

  11. Nerve Blocks

    Science.gov (United States)

    A nerve block is an injection to ... the limitations of Nerve Block? What is a Nerve Block? A nerve block is an anesthetic and/ ...

  12. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    Full Text Available In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among the channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  13. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes but not in foam cells fluorescent sterol was confined to the droplet-limiting membrane. We developed an approach to visualize...... macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon...

  14. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request
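
MAXED itself is available from the authors, but the underlying idea can be sketched generically: choose the spectrum that fits the measured counts while maximizing a relative-entropy term toward a default spectrum. The function below is an illustrative stand-in under those assumptions, not the MAXED algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def maxent_unfold(R, counts, default, alpha=1.0):
    """Generic maximum-entropy unfolding sketch: find a non-negative
    spectrum f fitting counts ~ R @ f while maximizing the relative
    entropy S = sum(f - f0 - f*log(f/f0)) toward a default f0.
    R is the detector response matrix; alpha balances fit vs prior."""
    def objective(f):
        resid = R @ f - counts
        chi2 = 0.5 * resid @ resid
        entropy = np.sum(f - default - f * np.log(f / default))
        return chi2 - alpha * entropy    # S <= 0, maximal at f == f0
    res = minimize(objective, default.copy(),
                   bounds=[(1e-9, None)] * len(default),
                   method="L-BFGS-B")
    return res.x
```

With a small alpha the data term dominates and the unfolded spectrum tracks the measurements; a large alpha pulls the solution toward the default spectrum, which is how prior knowledge enters maximum-entropy unfolding.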

  15. Acquired heart block: a possible complication of patent ductus arteriosus in a preterm infant.

    Science.gov (United States)

    Grasser, Monika; Döhlemann, Christoph; Mittal, Rashmi; Till, Holger; Dietz, Hans-Georg; Münch, Georg; Holzinger, Andreas

    2008-01-01

    A large patent ductus arteriosus (PDA) is a frequently encountered clinical problem in extremely low birth weight (ELBW) infants. It leads to an increased pulmonary blood flow and a decreased or reversed diastolic flow in the systemic circulation, resulting in complications. Here we report a possible complication of PDA not previously published. On day 8 of life, a male ELBW infant (birth weight 650 g) born at a gestational age of 23 weeks and 3 days developed an atrioventricular block (AV block). The heart rate dropped from 168/min to 90/min, and the ECG showed a Wenckebach second-degree AV block and intraventricular conduction disturbances. Echocardiography demonstrated a PDA with a large left-to-right shunt and a large left atrium and left ventricle with high contractility. Within several minutes after surgical closure of the PDA, the heart rate increased, and after 30 min the AV block had improved to a 1:1 conduction ratio. Echocardiography after 2 h revealed a significant decrease of the left ventricular and atrial dimensions. Within 12 h, the AV block completely reversed together with the intraventricular conduction disturbances. We suggest that PDA with a large left-to-right shunt and left ventricular volume overload may lead to an AV block in an ELBW infant. Surgical closure of the PDA may be indicated. (c) 2007 S. Karger AG, Basel.

  16. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.
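
The stutter-removal idea lends itself to a compact illustration. If each true allele contributes a fixed stutter pattern at neighbouring fragment lengths, the observed profile is a convolution, and deconvolution reduces to solving a triangular Toeplitz system. The pattern below is hypothetical; real stutter ratios are marker-dependent.

```python
import numpy as np

def remove_stutter(observed, pattern):
    """Remove PCR stutter from observed band intensities. Each true
    allele of amount a contributes a*pattern[k] at its k-th stutter
    position, so the observed profile is a causal convolution and
    undoing it is a lower-triangular Toeplitz solve."""
    n = len(observed)
    A = np.zeros((n, n))
    for k, p in enumerate(pattern):
        A += p * np.eye(n, k=-k)   # k-th stutter sub-diagonal
    return np.linalg.solve(A, observed)
```

Because the system is triangular with the main-band coefficient on the diagonal, the solve is stable and uniquely recovers the allele amounts when the pattern is known.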

  17. Ciprofloxacin blocked enterohepatic circulation of diclofenac and alleviated NSAID-induced enteropathy in rats partly by inhibiting intestinal β-glucuronidase activity

    Science.gov (United States)

    Zhong, Ze-yu; Sun, Bin-bin; Shu, Nan; Xie, Qiu-shi; Tang, Xian-ge; Ling, Zhao-li; Wang, Fan; Zhao, Kai-jing; Xu, Ping; Zhang, Mian; Li, Ying; Chen, Yang; Liu, Li; Xia, Lun-zhu; Liu, Xiao-dong

    2016-01-01

    Aim: Diclofenac is a non-steroidal anti-inflammatory drug (NSAID), which may cause serious intestinal adverse reactions (enteropathy). In this study we investigated whether co-administration of ciprofloxacin affected the pharmacokinetics of diclofenac and diclofenac-induced enteropathy in rats. Methods: The pharmacokinetics of diclofenac was assessed in rats after receiving diclofenac (10 mg/kg, ig, or 5 mg/kg, iv), with or without ciprofloxacin (20 mg/kg, ig) co-administered. After receiving 6 oral doses or 15 intravenous doses of diclofenac, the rats were sacrificed, and the small intestine was removed to examine diclofenac-induced enteropathy. β-Glucuronidase activity in intestinal content, bovine liver and E. coli was evaluated. Results: Following oral or intravenous administration, the pharmacokinetic profile of diclofenac displayed typical enterohepatic circulation; co-administration of ciprofloxacin abolished the enterohepatic circulation and resulted in a significant reduction in the plasma content of diclofenac. In control rats, β-glucuronidase activity in small intestinal content was region-dependent, increasing from the proximal to the distal intestine. After oral administration of diclofenac, typical enteropathy developed, with severe enteropathy occurring in the distal small intestine. Co-administration of ciprofloxacin significantly alleviated diclofenac-induced enteropathy. Conclusion: Co-administration of ciprofloxacin attenuated enterohepatic circulation of diclofenac and alleviated diclofenac-induced enteropathy in rats, partly via the inhibition of intestinal β-glucuronidase activity. PMID:27180979

  18. Ciprofloxacin blocked enterohepatic circulation of diclofenac and alleviated NSAID-induced enteropathy in rats partly by inhibiting intestinal β-glucuronidase activity.

    Science.gov (United States)

    Zhong, Ze-Yu; Sun, Bin-Bin; Shu, Nan; Xie, Qiu-Shi; Tang, Xian-Ge; Ling, Zhao-Li; Wang, Fan; Zhao, Kai-Jing; Xu, Ping; Zhang, Mian; Li, Ying; Chen, Yang; Liu, Li; Xia, Lun-Zhu; Liu, Xiao-Dong

    2016-07-01

    Diclofenac is a non-steroidal anti-inflammatory drug (NSAID), which may cause serious intestinal adverse reactions (enteropathy). In this study we investigated whether co-administration of ciprofloxacin affected the pharmacokinetics of diclofenac and diclofenac-induced enteropathy in rats. The pharmacokinetics of diclofenac was assessed in rats after receiving diclofenac (10 mg/kg, ig, or 5 mg/kg, iv), with or without ciprofloxacin (20 mg/kg, ig) co-administered. After receiving 6 oral doses or 15 intravenous doses of diclofenac, the rats were sacrificed, and the small intestine was removed to examine diclofenac-induced enteropathy. β-Glucuronidase activity in intestinal content, bovine liver and E. coli was evaluated. Following oral or intravenous administration, the pharmacokinetic profile of diclofenac displayed typical enterohepatic circulation; co-administration of ciprofloxacin abolished the enterohepatic circulation and resulted in a significant reduction in the plasma content of diclofenac. In control rats, β-glucuronidase activity in small intestinal content was region-dependent, increasing from the proximal to the distal intestine. After oral administration of diclofenac, typical enteropathy developed, with severe enteropathy occurring in the distal small intestine. Co-administration of ciprofloxacin significantly alleviated diclofenac-induced enteropathy. Co-administration of ciprofloxacin attenuated enterohepatic circulation of diclofenac and alleviated diclofenac-induced enteropathy in rats, partly via the inhibition of intestinal β-glucuronidase activity.

  19. Seasonal overturning circulation in the Red Sea: 2. Winter circulation

    Science.gov (United States)

    Yao, Fengchao; Hoteit, Ibrahim; Pratt, Larry J.; Bower, Amy S.; Köhl, Armin; Gopalakrishnan, Ganesh; Rivas, David

    2014-04-01

    The shallow winter overturning circulation in the Red Sea is studied using a 50 year high-resolution MITgcm (MIT general circulation model) simulation with realistic atmospheric forcing. The overturning circulation for a typical year, represented by 1980, and the climatological mean are analyzed using model output to delineate the three-dimensional structure and to investigate the underlying dynamical mechanisms. The horizontal model circulation in the winter of 1980 is dominated by energetic eddies. The climatological model mean results suggest that the surface inflow intensifies in a western boundary current in the southern Red Sea that switches to an eastern boundary current north of 24°N. The overturning is accomplished through a cyclonic recirculation and a cross-basin overturning circulation in the northern Red Sea, with major sinking occurring along a narrow band of width about 20 km along the eastern boundary and weaker upwelling along the western boundary. The northward pressure gradient force, strong vertical mixing, and horizontal mixing near the boundary are the essential dynamical components in the model's winter overturning circulation. The simulated water exchange is not hydraulically controlled in the Strait of Bab el Mandeb; instead, the exchange is limited by bottom and lateral boundary friction and, to a lesser extent, by interfacial friction due to the vertical viscosity at the interface between the inflow and the outflow.

  20. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Block subdivision algorithms are evaluated by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that, given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  1. A Recursive Formulation of Cholesky Factorization of a Matrix in Packed Storage Format

    DEFF Research Database (Denmark)

    Andersen, Bjarne Stig; Gustavson, Fred; Wasniewski, Jerzy

    2001-01-01

    . Algorithm RPC is based on level-3 BLAS and requires variants of algorithms TRSM and SYRK that work on RPF. We call these RP_TRSM and RP_SYRK and find that they do most of their work by calling...... matrix. Second, RPC gives a level-3 implementation of Cholesky factorization whereas standard packed implementations are only level 2. Hence, the performance of our RPC implementation is decidedly superior. Third, unlike fixed block size algorithms, RPC requires no block size tuning parameter. We present...
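
For reference, the conventional packed-storage Cholesky factorization that algorithm RPC improves upon can be sketched as follows. This is the level-2 baseline operating directly on packed lower storage, not the recursive packed format of the paper; names are illustrative.

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of a symmetric matrix column by column."""
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

def packed_index(n, i, j):
    """Offset of element (i, j), i >= j, in packed lower storage."""
    return i - j + j * (2 * n - j + 1) // 2

def packed_cholesky(ap, n):
    """Cholesky factorization L L^T = A operating directly on packed
    lower storage: the classical level-2 packed algorithm that the
    recursive packed (RPC) format reformulates with level-3 BLAS."""
    ap = ap.copy()
    for j in range(n):
        for k in range(j):                      # apply previous columns
            ljk = ap[packed_index(n, j, k)]
            for i in range(j, n):
                ap[packed_index(n, i, j)] -= ap[packed_index(n, i, k)] * ljk
        d = np.sqrt(ap[packed_index(n, j, j)])
        for i in range(j, n):                   # scale column j
            ap[packed_index(n, i, j)] /= d
    return ap
```

Packed storage halves the memory of a full symmetric matrix, but the scattered indexing above is exactly what keeps this formulation at level-2 performance.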

  2. Circulation pump mounting

    International Nuclear Information System (INIS)

    Skalicky, A.

    1976-01-01

    A suspension for nuclear reactor circulating pumps is described that permits their thermal dilatation with a minimum restoring force. It consists of spacing rods supported at one end in anchor joints and provided with springs and screw joints engaging the circulating pump shoes. The spacing rods are equipped with side vibration dampers anchored in the shaft side wall and on the body of the circulating pump drive. The restoring force F of the spacing rods is given by the relation F = (Q/l)·y, where Q is the weight of the circulating pump, l is the spatial distance between the shoe joints and the anchor joints, and y is the deflection of the circulating pump vertical axis from the mean equilibrium position. The described suspension is advantageous in that the restoring force for a deflection from the mean equilibrium position is minimal, the dynamic behaviour is better, and the construction costs are lower compared to suspension designs used so far. (J.B.)

  3. Nanoporous materials from stable and metastable structures of 1,2-PB-b-PDMS block copolymers

    DEFF Research Database (Denmark)

    Schulte, Lars; Grydgaard, Anne; Jakobsen, Mathilde R.

    2011-01-01

    matrix component) and secondly degrading PDMS (the expendable component). Depending on the temperature of the cross-linking reaction different morphologies can be ‘frozen’ from the same block copolymer. Starting with a block copolymer precursor of lamellar morphology at room temperature, the gyroid...... structure or a metastable structure showing hexagonal symmetry (probably HPL) were permanently captured by cross-linking the precursor at 140 °C or at 85 °C, respectively. PDMS was degraded by reaction with tetrabutylamonium fluoride; considerations on the mechanism of cleaving reaction are presented...

  4. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    International Nuclear Information System (INIS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L; Soares, Edward J; Lemahieu, Ignace; Glick, Stephen J

    2006-01-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast
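
The storage saving noted above can be illustrated directly: a block circulant matrix is fully determined by one generating block sequence (indexed here by the first block column so that the product is a plain cyclic block convolution), and matrix-vector products can be applied with FFTs across the block index. A sketch with illustrative names:

```python
import numpy as np

def blockcirc_matvec(gen_blocks, x):
    """Multiply a block circulant matrix by a vector while storing
    only its generating block sequence (block row i, block column j
    of the full matrix holds gen_blocks[(i - j) % p]). The product
    is a cyclic block convolution, evaluated with FFTs across the
    block index."""
    p = len(gen_blocks)
    m, n = gen_blocks[0].shape
    Bhat = np.fft.fft(np.stack(gen_blocks), axis=0)  # (p, m, n)
    Xhat = np.fft.fft(x.reshape(p, n), axis=0)       # (p, n)
    Yhat = np.einsum('kmn,kn->km', Bhat, Xhat)       # blockwise products
    return np.real(np.fft.ifft(Yhat, axis=0)).reshape(p * m)
```

Only p blocks are kept in memory instead of p-squared, which is the same kind of saving the abstract exploits for the system matrix M.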

  5. Tumor-Associated Macrophages Derived from Circulating Inflammatory Monocytes Degrade Collagen through Cellular Uptake

    DEFF Research Database (Denmark)

    Madsen, Daniel Hargbøl; Jürgensen, Henrik Jessen; Siersbæk, Majken Storm

    2017-01-01

    -associated macrophage (TAM)-like cells that degrade collagen in a mannose receptor-dependent manner. Accordingly, mannose-receptor-deficient mice display increased intratumoral collagen. Whole-transcriptome profiling uncovers a distinct extracellular matrix-catabolic signature of these collagen-degrading TAMs. Lineage......-ablation studies reveal that collagen-degrading TAMs originate from circulating CCR2+ monocytes. This study identifies a function of TAMs in altering the tumor microenvironment through endocytic collagen turnover and establishes macrophages as centrally engaged in tumor-associated collagen degradation. Madsen et...

  6. Circulating levels of chromatin fragments are inversely correlated with anti-dsDNA antibody levels in human and murine systemic lupus erythematosus

    DEFF Research Database (Denmark)

    Jørgensen, Mariann H; Rekvig, Ole Petter; Jacobsen, Rasmus S

    2011-01-01

    Anti-dsDNA antibodies represent a central pathogenic factor in Lupus nephritis. Together with nucleosomes they deposit as immune complexes in the mesangial matrix and along basement membranes within the glomeruli. The origin of the nucleosomes and when they appear e.g. in circulation is not known...

  7. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan; Pasciak, Joseph E.; Sirenko, Kostyantyn

    2014-01-01

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.
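
The block Gaussian elimination that the sweeping preconditioner approximates can be sketched with exact dense Schur complements; the preconditioner described above replaces the inverses below with low-rank hierarchical approximations. Names and the diagonally dominant test setup are illustrative.

```python
import numpy as np

def block_tridiagonal_solve(D, L, U, b):
    """Solve a block tridiagonal system by block Gaussian elimination
    (a block Thomas algorithm). D holds the diagonal blocks, L the
    sub-diagonal blocks, U the super-diagonal blocks. The sweeping
    preconditioner replaces the dense Schur complements S[i] below
    with low-rank hierarchical approximations."""
    n, m = len(D), D[0].shape[0]
    S = [None] * n   # Schur complements of the forward sweep
    y = [None] * n
    S[0], y[0] = D[0], b[:m].copy()
    for i in range(1, n):
        W = L[i - 1] @ np.linalg.inv(S[i - 1])
        S[i] = D[i] - W @ U[i - 1]
        y[i] = b[i * m:(i + 1) * m] - W @ y[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(S[-1], y[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(S[i], y[i] - U[i] @ x[i + 1])
    return np.concatenate(x)
```

The forward sweep is where the cost concentrates: each Schur complement couples to the previous one, which is exactly the recursion whose off-diagonal ranks the hierarchical approximation compresses.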

  8. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan

    2014-11-11

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.

  9. Can preferred atmospheric circulation patterns over the North-Atlantic-Eurasian region be associated with arctic sea ice loss?

    Science.gov (United States)

    Crasemann, Berit; Handorf, Dörthe; Jaiser, Ralf; Dethloff, Klaus; Nakamura, Tetsu; Ukita, Jinro; Yamazaki, Koji

    2017-12-01

    In the framework of atmospheric circulation regimes, we study whether the recent Arctic sea ice loss and Arctic Amplification are associated with changes in the frequency of occurrence of preferred atmospheric circulation patterns during the extended winter season from December to March. To determine regimes we applied a cluster analysis to sea-level pressure fields from reanalysis data and output from an atmospheric general circulation model. The specific setup of the two analyzed model simulations for low and high ice conditions allows differences between the simulations to be attributed to the prescribed sea ice changes alone. The reanalysis data revealed two circulation patterns that occur more frequently under low Arctic sea ice conditions: a Scandinavian blocking in December and January and a negative North Atlantic Oscillation pattern in February and March. An analysis of related patterns of synoptic-scale activity and 2 m temperatures provides a synoptic interpretation of the corresponding large-scale regimes. The regimes that occur more frequently under low sea ice conditions are reproduced reasonably well by the model simulations. Based on these results we conclude that the detected changes in the frequency of occurrence of large-scale circulation patterns can be associated with changes in Arctic sea ice conditions.

  10. Water Residence Time estimation by 1D deconvolution in the form of an ℓ2-regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
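The kind of constrained ℓ2 deconvolution described above can be illustrated by a minimal sketch (not the authors' algorithm): a causal convolution matrix, a second-difference smoothness penalty weighted by a single regularization parameter, and positivity enforced by projection. The kernel and all names here are assumptions for illustration.

```python
import numpy as np

def deconvolve(y, h, lam=1e-3, n_iter=5000):
    """Estimate x >= 0 from y = h * x (causal convolution) by minimizing
    ||H x - y||^2 + lam * ||D x||^2 with projected gradient descent.
    A sketch of the regularized approach, not the published code."""
    n = len(y)
    # Causal (lower-triangular Toeplitz) convolution matrix
    H = np.zeros((n, n))
    for i in range(len(h)):
        H += np.diag(np.full(n - i, h[i]), -i)
    # Second-difference operator enforcing smoothness
    D = np.diff(np.eye(n), 2, axis=0)
    G = H.T @ H + lam * D.T @ D
    step = 1.0 / np.linalg.norm(G, 2)          # 1/L, L = largest eigenvalue
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (G @ x - H.T @ y))  # gradient + positivity
    return x

# Hypothetical demo: exponential impulse response, smooth nonnegative input
h = np.exp(-np.arange(8) / 3.0)
t = np.arange(40)
x_true = np.exp(-0.5 * ((t - 15) / 3.0) ** 2)
y = np.convolve(h, x_true)[:40]                # noiseless observations
x_hat = deconvolve(y, h, lam=1e-3)
```

The single parameter `lam` plays the role of the regularization parameter balancing smoothness against reconstruction accuracy; causality is built into the lower-triangular structure of `H`.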

  11. A zonally symmetric model for the monsoon-Hadley circulation with stochastic convective forcing

    Science.gov (United States)

    De La Chevrotière, Michèle; Khouider, Boualem

    2017-02-01

    Idealized models of reduced complexity are important tools to understand key processes underlying a complex system. In climate science in particular, they are important for helping the community improve our ability to predict the effect of climate change on the earth system. Climate models are large computer codes based on the discretization of the fluid dynamics equations on grids of horizontal resolution in the order of 100 km, whereas unresolved processes are handled by subgrid models. For instance, simple models are routinely used to help understand the interactions between small-scale processes due to atmospheric moist convection and large-scale circulation patterns. Here, a zonally symmetric model for the monsoon circulation is presented and solved numerically. The model is based on the Galerkin projection of the primitive equations of atmospheric synoptic dynamics onto the first modes of vertical structure to represent free tropospheric circulation and is coupled to a bulk atmospheric boundary layer (ABL) model. The model carries bulk equations for water vapor in both the free troposphere and the ABL, while the processes of convection and precipitation are represented through a stochastic model for clouds. The model equations are coupled through advective nonlinearities, and the resulting system is not conservative and not necessarily hyperbolic. This makes the design of a numerical method for the solution of this system particularly difficult. Here, we develop a numerical scheme based on the operator time-splitting strategy, which decomposes the system into three pieces: a conservative part and two purely advective parts, each of which is solved iteratively using an appropriate method. The conservative system is solved via a central scheme, which does not require hyperbolicity since it avoids the Riemann problem by design. One of the advective parts is a hyperbolic diagonal matrix, which is easily handled by classical methods for hyperbolic equations, while
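The operator time-splitting strategy can be illustrated on a toy advection-relaxation equation (a sketch, not the monsoon model itself): each sub-operator is solved exactly on its own, and the exact sub-solves are composed once per time step.

```python
import numpy as np

# Toy problem: u_t + c u_x = -lam * u on a periodic grid. The exact
# solution is u0(x - c t) * exp(-lam * t).
n = 64
x = np.arange(n) / n
u0 = np.exp(-100.0 * (x - 0.3) ** 2)        # initial bump
dx = 1.0 / n
c, lam = 1.0, 2.0
dt = dx / c                                  # CFL = 1: advection shifts one cell

def split_step(u, lam, dt):
    """One Lie splitting step: solve the advection piece exactly by a
    circular shift, then the relaxation piece exactly by decay."""
    u = np.roll(u, 1)                        # advection sub-step (one cell)
    return u * np.exp(-lam * dt)             # relaxation sub-step

v = u0.copy()
steps = 32
for _ in range(steps):
    v = split_step(v, lam, dt)

# Reference: advect by `steps` cells and decay by exp(-lam * steps * dt)
exact = np.roll(u0, steps) * np.exp(-lam * steps * dt)
```

In this toy case the two sub-operators commute (a shift and a scalar multiplication), so the splitting is exact; for the coupled, non-commuting system described in the abstract, the same composition is first-order accurate in the time step.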

  12. Stabilization and augmentation of circulating AIM in mice by synthesized IgM-Fc.

    Directory of Open Access Journals (Sweden)

    Toshihiro Kai

    Full Text Available Owing to rapid and drastic changes in lifestyle and eating habits in modern society, obesity and obesity-associated diseases are among the most important public health problems. Hence, the development of therapeutic approaches to regulate obesity is strongly desired. In view of previous work showing that apoptosis inhibitor of macrophage (AIM blocks lipid storage in adipocytes, thereby preventing obesity caused by a high-fat diet, we here explored a strategy to augment circulating AIM levels. We synthesized the Fc portion of the soluble human immunoglobulin (IgM heavy chain and found that it formed a pentamer containing IgJ as natural IgM does, and effectively associated with AIM in vitro. When we injected the synthesized Fc intravenously into mice lacking circulating IgM, it associated with endogenous mouse AIM, protecting AIM from renal excretion and preserving the circulating AIM levels. As the synthesized Fc lacked the antigen-recognizing variable region, it provoked no undesired immune response. In addition, a challenge with the Fc-human AIM complex in wild-type mice, which exhibited normal levels of circulating IgM and AIM, successfully maintained the levels of the human AIM in mouse blood. We also observed that the human AIM was effectively incorporated into adipocytes in visceral fat tissue, suggesting its functionality against obesity. Thus, our findings reveal potent strategies to safely increase AIM levels, which could form the basis for developing novel therapies for obesity.

  13. Hollow ZIF-8 Nanoworms from Block Copolymer Templates

    Science.gov (United States)

    Yu, Haizhou; Qiu, Xiaoyan; Neelakanda, Pradeep; Deng, Lin; Khashab, Niveen M.; Nunes, Suzana P.; Peinemann, Klaus-Viktor

    2015-10-01

    Recently two quite different types of “nano-containers” have been recognized as attractive potential drug carriers: wormlike filamentous micelles (“filomicelles”) on the one hand and metal organic frameworks on the other. In this work we combine these two concepts. We report for the first time the manufacturing of metal organic framework nanotubes with a hollow core. These worm-like tubes are about 200 nm thick and several μm long. The preparation is simple: we first produce long and flexible filament-shaped micelles by block copolymer self-assembly. These filomicelles serve as templates to grow a very thin layer of interconnected ZIF-8 crystals on their surface. Finally, the block copolymer is removed by solvent extraction and the hollow ZIF-8 nanotubes remain. These ZIF-NTs are surprisingly stable and withstand purification by centrifugation. The synthesis method is straightforward and can easily be applied to other metal organic framework materials. The ZIF-8 NTs exhibit a high loading capacity for the model anticancer drug doxorubicin (DOX) with pH-triggered release. Hence, prolonged circulation in the blood stream and targeted drug release behavior can be expected.

  14. Hollow ZIF-8 Nanoworms from Block Copolymer Templates

    KAUST Repository

    Yu, Haizhou; Qiu, Xiaoyan; Neelakanda, Pradeep; Deng, Lin; Khashab, Niveen M.; Nunes, Suzana Pereira; Peinemann, Klaus-Viktor

    2015-01-01

    Recently two quite different types of “nano-containers” have been recognized as attractive potential drug carriers: wormlike filamentous micelles (“filomicelles”) on the one hand and metal organic frameworks on the other. In this work we combine these two concepts. We report for the first time the manufacturing of metal organic framework nanotubes with a hollow core. These worm-like tubes are about 200 nm thick and several μm long. The preparation is simple: we first produce long and flexible filament-shaped micelles by block copolymer self-assembly. These filomicelles serve as templates to grow a very thin layer of interconnected ZIF-8 crystals on their surface. Finally, the block copolymer is removed by solvent extraction and the hollow ZIF-8 nanotubes remain. These ZIF-NTs are surprisingly stable and withstand purification by centrifugation. The synthesis method is straightforward and can easily be applied to other metal organic framework materials. The ZIF-8 NTs exhibit a high loading capacity for the model anticancer drug doxorubicin (DOX) with pH-triggered release. Hence, prolonged circulation in the blood stream and targeted drug release behavior can be expected.

  15. Hollow ZIF-8 Nanoworms from Block Copolymer Templates

    KAUST Repository

    Yu, Haizhou

    2015-10-16

    Recently two quite different types of “nano-containers” have been recognized as attractive potential drug carriers: wormlike filamentous micelles (“filomicelles”) on the one hand and metal organic frameworks on the other. In this work we combine these two concepts. We report for the first time the manufacturing of metal organic framework nanotubes with a hollow core. These worm-like tubes are about 200 nm thick and several μm long. The preparation is simple: we first produce long and flexible filament-shaped micelles by block copolymer self-assembly. These filomicelles serve as templates to grow a very thin layer of interconnected ZIF-8 crystals on their surface. Finally, the block copolymer is removed by solvent extraction and the hollow ZIF-8 nanotubes remain. These ZIF-NTs are surprisingly stable and withstand purification by centrifugation. The synthesis method is straightforward and can easily be applied to other metal organic framework materials. The ZIF-8 NTs exhibit a high loading capacity for the model anticancer drug doxorubicin (DOX) with pH-triggered release. Hence, prolonged circulation in the blood stream and targeted drug release behavior can be expected.

  16. [Regional liver circulation and scintigraphic imaging of portal circulation with 133Xe].

    Science.gov (United States)

    Kroiss, A

    1984-01-01

    Regional hepatic blood flow has been determined by 4 methods with the aid of the 133Xe washout technique: scintisplenoportography (direct application of 133Xe into the spleen by means of a thin needle); arterial method (133Xe is injected into the A. hepatica by means of a catheter); retrograde-venous method (133Xe administered by an occluding hepatic vein catheter); percutaneous intrahepatic method (133Xe administered directly into the parenchyma by means of a Chiba needle). Ad 1.: Scintisplenoportography (SSP) was executed with 97 patients: 8 patients with a healthy liver presented a hepatic blood flow of 103.37 +/- 11.5 ml/100 g/min. 4 patients with a chronic hepatitis showed a hepatic blood flow of 105.67 +/- 10.2 ml/100 g/min. In 38 patients with compensated cirrhosis, hepatic blood flow was determined with 58.15 +/- 11.5 ml/100 g/min and 19 patients with decompensated cirrhosis showed a blood flow of 34.54 +/- 7.2 ml/100 g/min. Of the 19 patients, who did not present any liver image, 2 patients suffered from a prehepatic block, 1 patient (female) from a posthepatic block, the rest were decompensated cirrhoses. In 5 patients suffering from steatosis only collateral circulation was determined and in 4 patients the spleen could not be punctured. In the patients with compensated and decompensated cirrhosis of the liver, hepatic blood flow differentiated significantly (p less than 0.001) from patients with healthy livers and chronic hepatitis. In the patients with bioptically assured steatosis only the washout constant was determined. Reproducibility of this method was tested in 4 patients and no statistical difference of hepatic blood flow values could be found and the correlation coefficient amounted to 0.9856. The advantage of SSP lies in the possibility of recording the portal vein circulation: cranial collaterals were found in 33 patients, 2 patients had caudal collaterals exclusively and 29 patients cranial and caudal collaterals. 33 cirrhosis patients

  17. Investigation of the lithosphere of the Texas Gulf Coast using phase-specific Ps receiver functions produced by wavefield iterative deconvolution

    Science.gov (United States)

    Gurrola, H.; Berdine, A.; Pulliam, J.

    2017-12-01

    Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolving all the data recorded at a seismic station and removing the phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.

  18. Dynamic swelling of tunable full-color block copolymer photonic gels via counterion exchange.

    Science.gov (United States)

    Lim, Ho Sun; Lee, Jae-Hwang; Walish, Joseph J; Thomas, Edwin L

    2012-10-23

    One-dimensionally periodic block copolymer photonic lamellar gels with full-color tunability as a result of a direct exchange of counteranions were fabricated via a two-step procedure comprising the self-assembly of a hydrophobic block-hydrophilic polyelectrolyte block copolymer, polystyrene-b-poly(2-vinyl pyridine) (PS-b-P2VP), followed by sequential quaternization of the P2VP layers in 1-bromoethane solution. Depending on the hydration characteristics of each counteranion, the selective swelling of the block copolymer lamellar structures leads to large tunability of the photonic stop band from blue to red wavelengths. More extensive quaternization of the P2VP block allows the photonic lamellar gels to swell more and red shift to longer wavelength. Here, we investigate the dynamic swelling behavior in the photonic gel films through time-resolved in situ measurement of UV-vis transmission. We model the swelling behavior using the transfer matrix method based on the experimentally observed reflectivity data with substitution of appropriate counterions. These tunable structural color materials may be attractive for numerous applications such as high-contrast displays without using a backlight, color filters, and optical mirrors for flexible lasing.
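The transfer matrix method mentioned above can be sketched for an idealized lossless lamellar stack at normal incidence (illustrative indices and thicknesses, not the fitted parameters of the paper):

```python
import numpy as np

def stack_reflectance(wavelength, layers, n_in=1.0, n_out=1.5):
    """Normal-incidence reflectance of a 1D multilayer via the transfer
    matrix method. layers is a list of (refractive_index, thickness)
    pairs, ordered from the side the light enters."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength         # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    a, b, c, d = M.ravel()
    r = ((n_in * a + n_in * n_out * b - c - n_out * d) /
         (n_in * a + n_in * n_out * b + c + n_out * d))
    return abs(r) ** 2

# Hypothetical quarter-wave lamellar stack tuned to 600 nm
lam0 = 600.0
layers = [(1.8, lam0 / (4 * 1.8)), (1.4, lam0 / (4 * 1.4))] * 8
R_center = stack_reflectance(lam0, layers)    # high reflectance in the stop band
R_off = stack_reflectance(450.0, layers)
```

Swelling a layer (increasing `d`) red-shifts the stop band, which is the mechanism behind the counterion-dependent color tuning described in the abstract.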

  19. A controlled release system for proteins based on poly(ether ester) block-copolymers: polymer network characterization

    NARCIS (Netherlands)

    Bezemer, J.M.; Grijpma, Dirk W.; Dijkstra, Pieter J.; van Blitterswijk, Clemens; Feijen, Jan

    1999-01-01

    The properties of a series of multiblock copolymers, based on hydrophilic poly(ethylene glycol) (PEG) and hydrophobic poly(butylene terephthalate) (PBT) blocks were investigated with respect to their application as a matrix for controlled release of proteins. The degree of swelling, Q, of the

  20. Cutaneous Sensory Block Area, Muscle-Relaxing Effect, and Block Duration of the Transversus Abdominis Plane Block

    DEFF Research Database (Denmark)

    Støving, Kion; Rothe, Christian; Rosenstock, Charlotte V

    2015-01-01

    BACKGROUND AND OBJECTIVES: The transversus abdominis plane (TAP) block is a widely used nerve block. However, basic block characteristics are poorly described. The purpose of this study was to assess the cutaneous sensory block area, muscle-relaxing effect, and block duration. METHODS: Sixteen … healthy volunteers were randomized to receive an ultrasound-guided unilateral TAP block with 20 mL 7.5 mg/mL ropivacaine and placebo on the contralateral side. Measurements were performed at baseline and 90 minutes after performing the block. Cutaneous sensory block area was mapped and separated … into a medial and lateral part by a vertical line through the anterior superior iliac spine. We measured muscle thickness of the 3 lateral abdominal muscle layers with ultrasound in the relaxed state and during maximal voluntary muscle contraction. The volunteers reported the duration of the sensory block …

  1. Block diagrams and the cancellation of divergencies in energy-level perturbation theory

    International Nuclear Information System (INIS)

    Michels, M.A.J.; Suttorp, L.G.

    1979-01-01

    The effective Hamiltonian for the degenerate energy-eigenvalue problem in adiabatic perturbation theory is cast in a form that permits an expansion in Feynman diagrams. By means of a block representation a resummation of these diagrams is carried out such that in the adiabatic limit no divergencies are encountered. The resummed form of the effective Hamiltonian is used to establish a connexion with the S matrix. (Auth.)

  2. Dynamic simulation of a circulating fluidized bed boiler system part I: Description of the dynamic system and transient behavior of sub-models

    International Nuclear Information System (INIS)

    Kim, Seong Il; Choi, Sang Min; Yang, Jong In

    2016-01-01

    Dynamic performance simulation of a CFB boiler in a commercial-scale power plant is reported. The boiler system was modeled by a finite number of heat exchanger units, which are sub-grouped into the gas-solid circulation loop, the water-steam circulation loop, and the inter-connected heat exchanger blocks of the boiler. This dynamic model is an extension of the previously reported performance simulation model, which was designed to simulate static performance of the same power plant, where heat and mass for each of the heat exchanger units were balanced over the inter-connected heat exchanger network between the fuel combustion system and the water-steam system. Dynamic performance simulation was achieved by calculating the incremental difference from the previous time step and progressing to the next time step. Additional discretization of the heat exchanger blocks was necessary to accommodate the dynamic response of the water evaporation and natural circulation as well as the transient response of the metal temperature of the heat exchanger elements. Presentation of the simulation modeling is organized into two parts; the system configuration of the model plant and the general approach of the simulation are presented along with the transient behavior of the sub-models in Part I. Dynamic sub-models were integrated in terms of the mass flow and the heat transfer for simulating the CFB boiler system. Dynamic simulation of the open-loop response was performed to check the integrated system of the water-steam loop and the solid-gas loop of the total boiler system. Simulation of the total boiler system, which includes the closed-loop control system blocks, is presented in the following Part II.

  3. Dynamic simulation of a circulating fluidized bed boiler system part I: Description of the dynamic system and transient behavior of sub-models

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seong Il; Choi, Sang Min; Yang, Jong In [Dept. of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-12-15

    Dynamic performance simulation of a CFB boiler in a commercial-scale power plant is reported. The boiler system was modeled by a finite number of heat exchanger units, which are sub-grouped into the gas-solid circulation loop, the water-steam circulation loop, and the inter-connected heat exchanger blocks of the boiler. This dynamic model is an extension of the previously reported performance simulation model, which was designed to simulate static performance of the same power plant, where heat and mass for each of the heat exchanger units were balanced over the inter-connected heat exchanger network between the fuel combustion system and the water-steam system. Dynamic performance simulation was achieved by calculating the incremental difference from the previous time step and progressing to the next time step. Additional discretization of the heat exchanger blocks was necessary to accommodate the dynamic response of the water evaporation and natural circulation as well as the transient response of the metal temperature of the heat exchanger elements. Presentation of the simulation modeling is organized into two parts; the system configuration of the model plant and the general approach of the simulation are presented along with the transient behavior of the sub-models in Part I. Dynamic sub-models were integrated in terms of the mass flow and the heat transfer for simulating the CFB boiler system. Dynamic simulation of the open-loop response was performed to check the integrated system of the water-steam loop and the solid-gas loop of the total boiler system. Simulation of the total boiler system, which includes the closed-loop control system blocks, is presented in the following Part II.

  4. A soft and conductive PDMS-PEG block copolymer as a compliant electrode for dielectric elastomers

    DEFF Research Database (Denmark)

    A Razak, Aliff Hisyam; Szabo, Peter; Skov, Anne Ladegaard

    Conductive PDMS-PEG block copolymers (Mn = 3–5 kg/mol) were chain-extended (Mn = 30–45 kg/mol) using a hydrosilylation reaction as presented in figure 1. Subsequently, the extended copolymers were added to a conductive nano-filler (multi-walled carbon nanotubes, MWCNTs) in order to enhance … conductivity. The combination of soft chain-extended PDMS-PEG block copolymers and conductive MWCNTs results in a soft and conductive block copolymer composite which potentially can be used as a compliant and highly stretchable electrode for dielectric elastomers. The addition of MWCNTs into the PDMS-PEG matrix … MWCNTs is 10⁻³ S/cm compared to 10⁻¹ S/cm for a non-stretchable reference conducting silicone elastomer (LR3162 from Wacker). Furthermore, the PDMS-PEG block copolymer with 4 phr MWCNTs (Young's modulus Y = 0.26 MPa) is softer and more stretchable than LR3162 (Y = 1.17 MPa).

  5. Self-Assembly and Crystallization of Conjugated Block Copolymers

    Science.gov (United States)

    Davidson, Emily Catherine

    This dissertation demonstrates the utility of molecular design in conjugated polymers to create diblock copolymers that robustly self-assemble in the melt and confine crystallization upon cooling. This work leverages the model conjugated polymer poly(3-(2'-ethyl)hexylthiophene) (P3EHT), which features a branched side chain, resulting in a dramatically reduced melting temperature (Tm ≈ 80°C) relative to the widely studied poly(3-hexylthiophene) (P3HT) (Tm ≈ 200°C). This reduced melting temperature permits an accessible melt phase without requiring that the segregation strength (χN) be dramatically increased. Thus, diblock copolymers containing P3EHT demonstrate robust diblock copolymer self-assembly in the melt over a range of compositions and morphologies. Furthermore, confined crystallization in the case of both glassy (polystyrene (PS) matrix block) and soft (polymethylacrylate (PMA) matrix block) confinement is studied, with the finding that even in soft confinement, crystallization is constrained within the diblock microdomains. This success demonstrates the strategy of leveraging molecular design to decrease the driving force for crystallization as a means to achieving robust self-assembly and confined crystallization in conjugated block copolymers. Importantly, despite the relatively flexible nature of P3EHT in the melt, the diblock copolymer phase behavior appears to be significantly impacted by the stiffness (persistence length ≈ 3 nm) of the P3EHT chain compared to the coupled amorphous blocks (persistence length ≈ 0.7 nm). In particular, it is shown that the synthesized morphologies are dominated by a very large composition window for lamellar geometries (favored at high P3EHT volume fractions); cylindrical geometries are favored when P3EHT is the minority fraction. This asymmetry of the composition window is attributed to the impact of conformational asymmetry (the difference in chain stiffness, as opposed to shape) between conjugated and amorphous blocks

  6. 31 CFR 594.301 - Blocked account; blocked property.

    Science.gov (United States)

    2010-07-01

    ... (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY GLOBAL TERRORISM SANCTIONS REGULATIONS General Definitions § 594.301 Blocked account; blocked property. The terms blocked account and...

  7. Application of seeding and automatic differentiation in a large scale ocean circulation model

    Directory of Open Access Journals (Sweden)

    Frode Martinsen

    2005-07-01

    Full Text Available Computation of the Jacobian in a 3-dimensional general ocean circulation model is considered in this paper. The Jacobian matrix considered in this paper is square, large and sparse. When a large and sparse Jacobian is being computed, proper seeding is essential to reduce computational times. This paper presents a manually designed seeding motivated by the Arakawa-C staggered grid, and gives results for the manually designed seeding as compared to identity seeding and optimal seeding. Finite differences are computed for reference.
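The seeding idea can be illustrated on a toy function whose Jacobian is tridiagonal rather than on the Arakawa-C grid itself: columns that never share a nonzero row can be perturbed together, so three directional differences recover the whole Jacobian. A hypothetical sketch:

```python
import numpy as np

def f(x):
    """Toy residual with tridiagonal Jacobian sparsity:
    f_i = x_i**2 + x_{i-1} - x_{i+1} (missing neighbors treated as 0)."""
    y = x ** 2
    y[1:] += x[:-1]
    y[:-1] -= x[1:]
    return y

def jacobian_seeded(f, x, eps=1e-7):
    """Finite-difference Jacobian for tridiagonal sparsity using 3 seed
    vectors (one per color) instead of len(x) coordinate perturbations.
    Columns with equal index mod 3 are structurally orthogonal, so one
    directional difference recovers all of them at once."""
    n = len(x)
    J = np.zeros((n, n))
    f0 = f(x)
    for color in range(3):
        seed = np.zeros(n)
        seed[color::3] = 1.0
        df = (f(x + eps * seed) - f0) / eps     # one extra evaluation per color
        for j in range(color, n, 3):
            for i in range(max(0, j - 1), min(n, j + 2)):  # nonzeros of column j
                J[i, j] = df[i]
    return J

x0 = np.linspace(0.5, 2.0, 7)
J = jacobian_seeded(f, x0)
```

Identity seeding would need `len(x)` perturbed evaluations here; the coloring reduces that to 3, which is the same economy a grid-motivated manual seeding buys in the ocean model.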

  8. On The Determinant of q-Distance Matrix of a Graph

    Directory of Open Access Journals (Sweden)

    Li Hong-Hai

    2014-02-01

    Full Text Available In this note, we show how the determinant of the q-distance matrix Dq(T) of a weighted directed graph G can be expressed in terms of the corresponding determinants for the blocks of G, and thus generalize the results obtained by Graham et al. [R.L. Graham, A.J. Hoffman and H. Hosoya, On the distance matrix of a directed graph, J. Graph Theory 1 (1977) 85-88]. Further, by means of this result, we determine the determinant of the q-distance matrix of the graph obtained from a connected weighted graph G by adding weighted branches to G, and so generalize in part the results obtained by Bapat et al. [R.B. Bapat, S. Kirkland and M. Neumann, On distance matrices and Laplacians, Linear Algebra Appl. 401 (2005) 193-209]. In particular, as a consequence, determinantal formulae of q-distance matrices for unicyclic graphs and one class of bicyclic graphs are presented.
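A small numeric illustration of the q-distance matrix (assuming the usual convention that entry (i, j) is 1 + q + ... + q^(d-1) for graph distance d): for trees the determinant depends only on the number of vertices, in the spirit of the Graham-Pollak result and its q-analogues, so a path and a star on 4 vertices give the same value.

```python
import numpy as np

def q_distance_matrix(edges, q):
    """q-distance matrix of an unweighted connected graph: entry (i, j)
    is 1 + q + ... + q**(d-1) = (q**d - 1)/(q - 1), d the graph distance."""
    n = max(max(e) for e in edges) + 1
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for i, j in edges:
        d[i, j] = d[j, i] = 1.0
    for k in range(n):                        # Floyd-Warshall shortest paths
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return np.where(d > 0, (q ** d - 1) / (q - 1), 0.0)

# Two trees on 4 vertices with different shapes
path = [(0, 1), (1, 2), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
det_path = np.linalg.det(q_distance_matrix(path, 2.0))
det_star = np.linalg.det(q_distance_matrix(star, 2.0))
```

At q = 1 the entries reduce to ordinary graph distances and the classical tree-independence of the determinant is recovered.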

  9. Functional porous structures based on the pyrolysis of cured templates of block copolymer and phenolic resin

    NARCIS (Netherlands)

    Kosonen, H; Valkama, S; Nykanen, A; Toivanen, M; ten Brinke, G; Ruokolainen, J; Ikkala, O; Nykänen, Antti

    2006-01-01

    Porous materials with controlled pore size and large surface area (see Figure) have been prepared by crosslinking phenolic resin in the presence of a self-assembled block-copolymer template, followed by pyrolysis. Many phenolic hydroxyl groups remain at the matrix and pore walls, which can be used

  10. Synthesis of Inorganic Nanocomposites by Selective Introduction of Metal Complexes into a Self-Assembled Block Copolymer Template

    Directory of Open Access Journals (Sweden)

    Hiroaki Wakayama

    2015-01-01

    Full Text Available Inorganic nanocomposites have characteristic structures that feature expanded interfaces, quantum effects, and resistance to crack propagation. These structures are promising for the improvement of many materials including thermoelectric materials, photocatalysts, and structural materials. Precise control of the inorganic nanocomposites’ morphology, size, and chemical composition is very important for these applications. Here, we present a novel fabrication method to control the structures of inorganic nanocomposites by means of a self-assembled block copolymer template. Different metal complexes were selectively introduced into specific polymer blocks of the block copolymer, and subsequent removal of the block copolymer template by oxygen plasma treatment produced hexagonally packed porous structures. In contrast, calcination removal of the block copolymer template yielded nanocomposites consisting of metallic spheres in a matrix of a metal oxide. These results demonstrate that different nanostructures can be created by selective use of processes to remove the block copolymer templates. The simple process of first mixing block copolymers and magnetic nanomaterial precursors and then subsequently removing the block copolymer template enables structural control of magnetic nanomaterials, which will facilitate their applicability in patterned media, including next-generation perpendicular magnetic recording media.

  11. Matrix of transmission in structural dynamics

    International Nuclear Information System (INIS)

    Mukherjee, S.

    1975-01-01

    Within the last few years numerous papers have been published on the subject of matrix methods in elasto-mechanics. The 'Matrix of Transmission' is one of the methods in this field which has gained considerable attention in recent years. The basic philosophy adopted in this method is based on the idea of breaking up a complicated system into component parts with simple elastic and dynamic properties which can be readily expressed in matrix form. These component matrices are considered as building blocks, which are fitted together according to a set of predetermined rules that then provide the static and dynamic properties of the entire system. A common type of system occurring in engineering practice consists of a number of elements linked together end to end in the form of a chain. The 'Transfer Matrix' is ideally suited for such a system, because only successive multiplication is necessary to connect these elements together. The number of degrees of freedom and intermediate conditions present no difficulty. Although the 'Transfer Matrix' method is suitable for the treatment of branched and coupled systems, its application to systems which do not have a predominant chain topology is not effective. Apart from the requirement that the system be linearly elastic, no other restrictions are made. In this paper, it is intended to give a general outline and theoretical formulation of the 'Transfer Matrix' and then its application to actual problems in structural dynamics related to seismic analysis. The natural frequencies of a freely vibrating elastic system can be found by applying the proper end conditions. The end conditions require the frequency determinant to vanish. By using a suitable numerical method, the natural frequencies and mode shapes are determined by making a frequency sweep within the range of interest. Results of an analysis of a typical nuclear building by this method show very close agreement with the results obtained by using the ASKA and SAP IV programs. Therefore
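The chain-topology transfer matrix idea can be sketched for a fixed-free chain of identical spring-mass elements, with the frequency sweep implemented as a bisection on the free-end force (an illustrative sketch, not the ASKA/SAP IV analysis of the abstract):

```python
import numpy as np

def tip_force_coefficient(omega, k, m, n_elems):
    """Propagate the state (displacement, force) through n_elems
    spring-mass elements of a fixed-free chain at frequency omega.
    Natural frequencies are where the force at the free end vanishes."""
    state = np.array([0.0, 1.0])                 # fixed base: x = 0, unit force
    spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])
    for _ in range(n_elems):
        mass = np.array([[1.0, 0.0], [-omega**2 * m, 1.0]])
        state = mass @ (spring @ state)          # successive multiplication
    return state[1]                              # end condition: F_tip = 0

def natural_frequency(k, m, n_elems, lo, hi, tol=1e-10):
    """Bisection on the frequency sweep for a sign change of F_tip."""
    f_lo = tip_force_coefficient(lo, k, m, n_elems)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_lo * tip_force_coefficient(mid, k, m, n_elems) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, tip_force_coefficient(mid, k, m, n_elems)
    return 0.5 * (lo + hi)
```

For a single element this recovers omega = sqrt(k/m); longer chains yield the usual fixed-free mode sequence, one root per sign change in the sweep.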

  12. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)

  13. Second level semi-degenerate fields in W{sub 3} Toda theory: matrix element and differential equation

    Energy Technology Data Exchange (ETDEWEB)

    Belavin, Vladimir [I.E. Tamm Department of Theoretical Physics, P.N. Lebedev Physical Institute,Leninsky Avenue 53, 119991 Moscow (Russian Federation); Department of Quantum Physics, Institute for Information Transmission Problems,Bolshoy Karetny per. 19, 127994 Moscow (Russian Federation); Moscow Institute of Physics and Technology,Dolgoprudnyi, 141700 Moscow region (Russian Federation); Cao, Xiangyu [LPTMS, CNRS (UMR 8626), Université Paris-Saclay,15 rue Georges Clémenceau, 91405 Orsay (France); Estienne, Benoit [LPTHE, CNRS and Université Pierre et Marie Curie, Sorbonne Universités,4 Place Jussieu, 75252 Paris Cedex 05 (France); Santachiara, Raoul [LPTMS, CNRS (UMR 8626), Université Paris-Saclay,15 rue Georges Clémenceau, 91405 Orsay (France)

    2017-03-02

    In a recent study we considered W{sub 3} Toda 4-point functions that involve matrix elements of a primary field with the highest-weight in the adjoint representation of sl{sub 3}. We generalize this result by considering a semi-degenerate primary field, which has one null vector at level two. We obtain a sixth-order Fuchsian differential equation for the conformal blocks. We discuss the presence of multiplicities, the matrix elements and the fusion rules.

  14. ["Habitual" left branch block alternating with 2 "disguised" branch blocks].

    Science.gov (United States)

    Lévy, S; Jullien, G; Mathieu, P; Mostefa, S; Gérard, R

    1976-10-01

    Two cases of alternating left bundle branch block and "masquerading block" (with left bundle branch morphology in the standard leads and right bundle branch block morphology in the precordial leads) were studied by serial tracings and His bundle electrocardiography. In case 1 the "masquerading" block was associated with a first degree AV block related to a prolongation of the HV interval. This case is, to our knowledge, the first case of alternating bundle branch block in which His bundle activity was recorded in man. In case 2, the patient had atrial fibrillation, and His bundle recordings were performed while different degrees of left bundle branch block were present. The mechanism of the alternation and the concept of the "masquerading" block are discussed. It is suggested that this type of block represents a right bundle branch block associated with severe lesions of the "left system".

  15. Levels of Circulating MMCN-151, a Degradation Product of Mimecan, Reflect Pathological Extracellular Matrix Remodeling in Apolipoprotein E Knockout Mice

    DEFF Research Database (Denmark)

    Barascuk, N; Vassiliadis, E; Zheng, Qiuju

    2011-01-01

    Arterial extracellular matrix (ECM) remodeling by matrix metalloproteinases (MMPs) is one of the major hallmarks of atherosclerosis. Mimecan, also known as osteoglycin has been implicated in the integrity of the ECM. This study assessed the validity of an enzyme-linked immunosorbent assay (ELISA...

  16. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted, so as to resolve more details of the crystalline morphology (see figure). Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current......Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required, so as to obtain information that relates...... to crystallographic-surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides means of atomic resolution where the tip participates actively in the process of imaging. Two metallic surfaces influence ions trapped...

  17. Seasonal overturning circulation in the Red Sea: 2. Winter circulation

    KAUST Repository

    Yao, Fengchao

    2014-04-01

    The shallow winter overturning circulation in the Red Sea is studied using a 50 year high-resolution MITgcm (MIT general circulation model) simulation with realistic atmospheric forcing. The overturning circulation for a typical year, represented by 1980, and the climatological mean are analyzed using model output to delineate the three-dimensional structure and to investigate the underlying dynamical mechanisms. The horizontal model circulation in the winter of 1980 is dominated by energetic eddies. The climatological model mean results suggest that the surface inflow intensifies in a western boundary current in the southern Red Sea that switches to an eastern boundary current north of 24°N. The overturning is accomplished through a cyclonic recirculation and a cross-basin overturning circulation in the northern Red Sea, with major sinking occurring along a narrow band of width about 20 km along the eastern boundary and weaker upwelling along the western boundary. The northward pressure gradient force, strong vertical mixing, and horizontal mixing near the boundary are the essential dynamical components in the model's winter overturning circulation. The simulated water exchange is not hydraulically controlled in the Strait of Bab el Mandeb; instead, the exchange is limited by bottom and lateral boundary friction and, to a lesser extent, by interfacial friction due to the vertical viscosity at the interface between the inflow and the outflow. Key Points: Sinking occurs in a narrow boundary layer along the eastern boundary; the surface western boundary current switches into an eastern boundary current; water exchange in the Strait of Bab el Mandeb is not hydraulically controlled. © 2014. American Geophysical Union. All Rights Reserved.

  18. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

    Full Text Available This research aims to formulate and select the strategy of BMT Al-Ittihad Rumbai to face the changing business environment, both internal (such as organizational resources, finance, and members) and external (such as competitors, the economy, politics, and others). The research method used analysis of EFAS, IFAS, the IE Matrix, the SWOT-8K Matrix, the SPACE Matrix, and the TWOS Matrix. Our hope is that this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was selected using a purposive sampling technique, namely the manager and leader of BMT Al-Ittihad Rumbai Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is growth, stabilization and aggressive, respectively. The strategies chosen after using the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  19. E-Block: A Tangible Programming Tool with Graphical Blocks

    Directory of Open Access Journals (Sweden)

    Danli Wang

    2013-01-01

    Full Text Available This paper presents a tangible programming tool, E-Block, for children aged 5 to 9 to gain a preliminary understanding of programming by building blocks. With embedded artificial intelligence, the tool defines programming blocks with sensors as the input and enables children to write programs to complete tasks on the computer. The symbol on each programming block's surface helps children understand the function of the block. The sequence information is transferred to the computer by microcomputers and then translated into semantic information. The system applies wireless and infrared technologies and provides the user with feedback on both the screen and the programming blocks. Preliminary user studies using observation and user interview methods are reported for E-Block's prototype. The test results show that E-Block is attractive to children and easy to learn and use. The project also highlights potential advantages of using single chip microcomputer (SCM) technology to develop tangible programming tools for children.

  20. Strong signatures of high-latitude blocks and subtropical ridges in winter PM10 over Europe

    Science.gov (United States)

    Ordonez, C.; Garrido-Perez, J. M.; Garcia-Herrera, R.

    2017-12-01

    Atmospheric blocking is associated with persistent, slow-moving high pressure systems that interrupt the eastward progress of extratropical storm systems at middle and high latitudes. Subtropical ridges are low latitude structures manifested as bands of positive geopotential height anomalies extending from sub-tropical latitudes towards extra-tropical regions. We have quantified the impact of blocks and ridges on daily PM10 (particulate matter ≤ 10 µm) observations obtained from the European Environment Agency's air quality database (AirBase) for the winter period of 2000-2010. For this purpose, the response of the PM10 concentrations to the location of blocks and ridges with centres in two main longitudinal sectors (Atlantic, ATL, 30˚-0˚ W; European, EUR, 0˚-30˚ E) is examined. EUR blocking is associated with a collapse of the boundary layer as well as reduced wind speeds and precipitation occurrence, yielding large positive anomalies which average 12 µg m⁻³ over the whole continent. Conversely, the enhanced zonal flow around 50˚-60˚ N and the increased occurrence of precipitation over northern-central Europe on days with ATL ridges favour the ventilation of the boundary layer and the impact of washout processes, reducing PM10 concentrations on average by around 8 µg m⁻³. The presence of EUR blocks is also concurrent with an increased probability of exceeding the European air quality target (50 µg m⁻³ for 24-h averaged PM10) and the local 90th percentiles for this pollutant at many sites, while the opposite effect is found for ridges. In addition, the effect of synoptic persistence on the PM10 concentrations is particularly strong for EUR blocks. Finally, we have found that the effect of both synoptic patterns can partly control the interannual variability of winter mean PM10 at many sites of north-western and central Europe, with coefficients of determination (R²) exceeding 0.80 for southern Germany. These results indicate that the response of the