WorldWideScience

Sample records for based sparse approximate

  1. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  2. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that sparse approximate inverses could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that arise from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth manner. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
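
The construction sketched in this record can be illustrated roughly as follows: form the dense inverse of a model elliptic operator, transform it into an orthonormal wavelet basis, and drop the small coefficients so that the retained representation is sparse. The sketch below is an assumption-laden toy (1D Laplacian, Haar wavelets, an arbitrary threshold), not the preconditioner construction from the record itself.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix (n must be a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                  # averaging (scaling) part
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])    # differencing (wavelet) part
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
# 1D Laplacian as a model elliptic operator; its inverse is dense but piecewise smooth.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)

W = haar_matrix(n)
C = W @ Ainv @ W.T                 # inverse represented in the Haar basis
tau = 1e-2 * np.max(np.abs(C))     # illustrative threshold, not from the paper
C[np.abs(C) < tau] = 0.0           # drop small wavelet coefficients
M = W.T @ C @ W                    # sparse-in-wavelet-basis approximate inverse

print(f"kept {np.count_nonzero(C) / n**2:.1%} of wavelet coefficients")
print("||I - M A||_2 =", np.linalg.norm(np.eye(n) - M @ A, 2))
```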

  3. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the aPC algorithm and extends the method, which was previously introduced only as a tensor-product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach makes it possible to propagate continuous or discrete probability density functions, and also histograms (data sets), as long as their moments exist, are finite, and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is based solely on the moments of the input probability distributions. It is referred to as SAMBA, which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher-order convergence and accuracy for a set of nonlinear test functions with 2, 5
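
The central computational step the abstract describes, obtaining recurrence coefficients and Gauss quadrature nodes/weights from the Hankel matrix of moments, can be sketched as below. This is a minimal Golub-Welsch-style illustration under stated assumptions (moments of the uniform density on [-1, 1] as stand-in data; the function name gauss_from_moments is ours), not the full SAMBA/Smolyak construction.

```python
import numpy as np

def gauss_from_moments(moments, n):
    """n-point Gauss quadrature from raw moments m_0..m_{2n}
    via Cholesky of the Hankel moment matrix (Golub-Welsch style)."""
    m = np.asarray(moments, dtype=float)
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T            # upper-triangular factor, H = R.T @ R
    alpha = np.empty(n)                    # three-term recurrence -> Jacobi matrix
    beta = np.empty(n - 1)
    for j in range(n):
        prev = R[j - 1, j] / R[j - 1, j - 1] if j > 0 else 0.0
        alpha[j] = R[j, j + 1] / R[j, j] - prev
        if j < n - 1:
            beta[j] = R[j + 1, j + 1] / R[j, j]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2       # first eigenvector components squared
    return nodes, weights

# Stand-in data: moments of the uniform density on [-1, 1].
n = 3
moments = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(2 * n + 1)]
x, w = gauss_from_moments(moments, n)
print(x)   # approx [-0.7746, 0, 0.7746]  (Gauss-Legendre nodes)
print(w)   # approx [0.5556, 0.8889, 0.5556]
```

Only the moments enter the computation, which is the point the abstract makes: the same routine works whether the moments come from a named density, a data set, or a histogram.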

  4. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    International Nuclear Information System (INIS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-01-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the aPC algorithm and extends the method, which was previously introduced only as a tensor-product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach makes it possible to propagate continuous or discrete probability density functions, and also histograms (data sets), as long as their moments exist, are finite, and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is based solely on the moments of the input probability distributions. It is referred to as SAMBA, which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher-order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10

  5. Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery

    Directory of Open Access Journals (Sweden)

    Weifeng Wang

    2014-02-01

    Full Text Available Numerical algorithms for l0-norm regularized non-smooth non-convex minimization problems have recently become a topic of great interest in signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate the l0-norm regularization and thereby make the discrete optimization problem continuous and smooth. We then use the well-known spectral gradient algorithm to solve the resulting smooth optimization problem. Experiments are provided which illustrate that this method is very promising.
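
A hedged sketch of the idea: replace the l0 count with a smooth surrogate such as 1 - exp(-x^2/(2*sigma^2)) and minimize the resulting smooth objective with spectral (Barzilai-Borwein) step sizes. The particular surrogate, step rule, and parameter values below are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

lam, sigma = 0.05, 0.1     # illustrative regularization and surrogate width

def grad(x):
    # gradient of 0.5*||Ax - b||^2 + lam * sum(1 - exp(-x^2 / (2*sigma^2)))
    smooth_l0_grad = lam * (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
    return A.T @ (A @ x - b) + smooth_l0_grad

x = np.zeros(n)
g = grad(x)
step = 1e-2
for _ in range(500):
    x_new = x - step * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    step = (s @ s) / (s @ y) if s @ y > 1e-12 else 1e-2   # BB1 spectral step
    x, g = x_new, g_new

print("estimated support:", np.nonzero(np.abs(x) > 0.05)[0])
print("true support     :", np.sort(np.nonzero(x_true)[0]))
```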

  6. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio; Tempone, Raul; Wolfers, Sö ren

    2017-01-01

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  7. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio

    2017-11-16

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation

  10. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal, both for the purpose of compression as well as for an efficient analysis, is the provision of optimally sparse approximations ... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames, as well as a reference for the state of the art of this research field.

  11. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
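
A rough sketch of the SPARROW idea as described above: sparse-approximate the query point over the dictionary of regressor measurements, then combine the corresponding regressands. Here a small orthogonal matching pursuit plays the role of the sparse approximation step and the selected coefficients are renormalized into weights; both choices are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k columns of D."""
    residual, support = y.copy(), []
    coeff = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeff[support] = sol
    return coeff

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(100, 3))                 # regressors (measurements)
t = np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + 0.1 * rng.standard_normal(100)  # regressands

def sparrow_predict(x_query, X, t, k=5):
    D = np.vstack([X.T, np.ones(len(X))])             # append 1s so offsets are representable
    y = np.concatenate([x_query, [1.0]])
    w = omp(D / np.linalg.norm(D, axis=0), y, k)      # sparse approximation of the query point
    w = np.abs(w) / np.sum(np.abs(w))                 # illustrative: turn coefficients into weights
    return w @ t                                      # weighted combination of regressands

print(sparrow_predict(np.array([0.5, -1.0, 0.3]), X, t))
```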

  12. Discussion of CoSA: Clustering of Sparse Approximations

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Derek Elswick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-07

    The purpose of this talk is to discuss the possible applications of CoSA (Clustering of Sparse Approximations) to the exploitation of HSI (HyperSpectral Imagery) data. CoSA is presented by Moody et al. in the Journal of Applied Remote Sensing (“Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries”, Vol. 8, 2014) and is based on machine learning techniques.

  13. A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems

    Czech Academy of Sciences Publication Activity Database

    Benzi, M.; Tůma, Miroslav

    1998-01-01

    Roč. 19, č. 3 (1998), s. 968-994 ISSN 1064-8275 R&D Projects: GA ČR GA201/93/0067; GA AV ČR IAA230401 Keywords: large sparse systems * iterative methods * preconditioning * approximate inverse * sparse linear systems * sparse matrices * incomplete factorizations * conjugate gradient-type methods Subject RIV: BA - General Mathematics Impact factor: 1.378, year: 1998

  14. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  15. Sparse linear models: Variational approximate inference and Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)

    2009-12-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  16. Dentate Gyrus circuitry features improve performance of sparse approximation algorithms.

    Directory of Open Access Journals (Sweden)

    Panagiotis C Petrantonakis

    Full Text Available Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG.
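
For reference, the baseline Iterative Soft Thresholding (ISTA) iteration that the record's DG-inspired modifications build on can be sketched as follows; the lateral-inhibition features that constitute the paper's contribution are not included.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=300):
    """Iterative Soft Thresholding for min_x 0.5*||y - Dx||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

rng = np.random.default_rng(2)
m, n, k = 50, 200, 5                          # few active units out of n, mimicking sparsity
D = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = D @ x_true

x_hat = ista(D, y, lam=0.01)
print("active units:", np.nonzero(x_hat > 0.1)[0],
      "vs true:", np.sort(np.nonzero(x_true)[0]))
```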

  17. Approximation and spatial regionalization of rainfall erosivity based on sparse data in a mountainous catchment of the Yangtze River in Central China.

    Science.gov (United States)

    Schönbrodt-Stitt, Sarah; Bosch, Anna; Behrens, Thorsten; Hartmann, Heike; Shi, Xuezheng; Scholten, Thomas

    2013-10-01

    In densely populated countries like China, clean water is one of the most challenging issues of prospective politics and environmental planning. Water pollution and eutrophication by excessive input of nitrogen and phosphorous from nonpoint sources is mostly linked to soil erosion from agricultural land. In order to prevent such water pollution by diffuse matter fluxes, knowledge about the extent of soil loss and the spatial distribution of hot spots of soil erosion is essential. In remote areas such as the mountainous regions of the upper and middle reaches of the Yangtze River, rainfall data are scarce. Since rainfall erosivity is one of the key factors in soil erosion modeling, e.g., expressed as the R factor in the Revised Universal Soil Loss Equation model, a methodology is needed to spatially determine rainfall erosivity. Our study aims at the approximation and spatial regionalization of rainfall erosivity from sparse data in the large (3,200 km²) and strongly mountainous catchment of the Xiangxi River, a first-order tributary to the Yangtze River close to the Three Gorges Dam. As data on rainfall were only obtainable in daily records for one climate station in the central part of the catchment and five stations in its surrounding area, we approximated rainfall erosivity as R factors using regression analysis combined with elevation bands derived from a digital elevation model. The mean annual R factor (Ra) amounts to approximately 5,222 MJ mm ha⁻¹ h⁻¹ a⁻¹. With increasing altitude, Ra rises up to a maximum of 7,547 MJ mm ha⁻¹ h⁻¹ a⁻¹ at an altitude of 3,078 m a.s.l. At the outlet of the Xiangxi catchment, erosivity is at its minimum with approximately Ra = 1,986 MJ mm ha⁻¹ h⁻¹ a⁻¹. The comparison of our results with R factors from high-resolution measurements at comparable study sites close to the Xiangxi catchment shows good consistency and allows us to calculate grid-based Ra as input for a spatially high-resolution and area-specific assessment of
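
The regionalization step described above amounts to regressing station erosivity on elevation and applying the fitted relation to elevation bands from a DEM. The sketch below uses entirely hypothetical station values and a single-predictor linear fit purely for illustration.

```python
import numpy as np

# Hypothetical annual rainfall-erosivity values (MJ mm ha^-1 h^-1 a^-1) at six stations
station_elev = np.array([150.0, 300.0, 650.0, 900.0, 1400.0, 2100.0])   # m a.s.l.
station_R    = np.array([2100.0, 2600.0, 3300.0, 3900.0, 5200.0, 6500.0])

# Fit R = a * elevation + b (ordinary least squares)
a, b = np.polyfit(station_elev, station_R, deg=1)

# Apply the regression to DEM-derived elevation bands to regionalize R
band_edges = np.arange(0.0, 3250.0, 250.0)            # 250 m elevation bands
band_mid = 0.5 * (band_edges[:-1] + band_edges[1:])
band_R = a * band_mid + b

for z, r in zip(band_mid, band_R):
    print(f"band centred at {z:6.0f} m: R ~ {r:7.0f} MJ mm ha^-1 h^-1 a^-1")
```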

  18. Sparse approximation of currents for statistics on curves and surfaces.

    Science.gov (United States)

    Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas

    2008-01-01

    Computing, processing, and visualizing statistics on shapes like curves or surfaces is a real challenge with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids feature-based approaches as well as point-correspondence methods. This framework has been proved to be powerful for registering brain surfaces or measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, since complexity increases as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.

  19. Sparse tensor spherical harmonics approximation in radiative transfer

    International Nuclear Information System (INIS)

    Grella, K.; Schwab, Ch.

    2011-01-01

    The stationary monochromatic radiative transfer equation is a partial differential transport equation stated on a five-dimensional phase space. To obtain a well-posed problem, boundary conditions have to be prescribed on the inflow part of the domain boundary. We solve the equation with a multi-level Galerkin FEM in physical space and a spectral discretization with harmonics in solid angle and show that the benefits of the concept of sparse tensor products, known from the context of sparse grids, can also be leveraged in combination with a spectral discretization. Our method allows us to include high spectral orders without incurring the 'curse of dimension' of a five-dimensional computational domain. Neglecting boundary conditions, we find analytically that for smooth solutions, the convergence rate of the full tensor product method is retained in our method up to a logarithmic factor, while the number of degrees of freedom grows essentially only as fast as for the purely spatial problem. For the case with boundary conditions, we propose a splitting of the physical function space and a conforming tensorization. Numerical experiments in two physical and one angular dimension show evidence for the theoretical convergence rates to hold in the latter case as well.

  20. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.

  1. Sparse and smooth canonical correlation analysis through rank-1 matrix approximation

    Science.gov (United States)

    Aïssa-El-Bey, Abdeldjalil; Seghouane, Abd-Krim

    2017-12-01

    Canonical correlation analysis (CCA) is a well-known technique used to characterize the relationship between two sets of multidimensional variables by finding linear combinations of variables with maximal correlation. Sparse CCA and smooth or regularized CCA are two widely used variants of CCA, because of the improved interpretability of the former and the better performance of the latter. So far, the cross-matrix product of the two sets of multidimensional variables has been widely used for the derivation of these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. These algorithms differ from existing ones in their derivation, which is based on penalized rank-1 matrix approximation and the orthogonal projectors onto the space spanned by the two sets of multidimensional variables, instead of the simple cross-matrix product. The performance and effectiveness of the proposed algorithms are tested in simulated experiments. These results show that they outperform state-of-the-art sparse CCA algorithms.
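
The "sparse CCA as penalized rank-1 approximation" idea can be conveyed with a simple alternating power iteration that soft-thresholds the two weight vectors. Note this toy operates on the plain cross-covariance, whereas the paper's algorithms use orthogonal projectors, so it is a simplified stand-in rather than the proposed method.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_cca_rank1(X, Y, lam_u=0.05, lam_v=0.05, n_iter=100):
    """Sparse rank-1 approximation of the cross-covariance C = X^T Y / n."""
    C = X.T @ Y / X.shape[0]
    v = np.linalg.svd(C)[2][0]                 # init with leading right singular vector
    for _ in range(n_iter):
        u = soft(C @ v, lam_u)
        if np.linalg.norm(u) > 0:
            u /= np.linalg.norm(u)
        v = soft(C.T @ u, lam_v)
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
    return u, v

rng = np.random.default_rng(3)
n = 500
z = rng.standard_normal(n)                      # shared latent factor
X = np.outer(z, np.r_[1.0, -1.0, np.zeros(8)]) + 0.5 * rng.standard_normal((n, 10))
Y = np.outer(z, np.r_[np.zeros(8), 1.0, 1.0]) + 0.5 * rng.standard_normal((n, 10))
u, v = sparse_cca_rank1(X, Y)
print("u support:", np.nonzero(np.abs(u) > 1e-3)[0])   # expected to concentrate on {0, 1}
print("v support:", np.nonzero(np.abs(v) > 1e-3)[0])   # expected to concentrate on {8, 9}
```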

  2. An Adaptive Multilevel Factorized Sparse Approximate Inverse Preconditioning

    Czech Academy of Sciences Publication Activity Database

    Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav

    2017-01-01

    Roč. 113, November (2017), s. 19-24 ISSN 0965-9978 R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : approximate inverse * Gram–Schmidt orthogonalization * incomplete factorization * multilevel methods * preconditioned conjugate gradient method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016

  3. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using l1-l2 optimization, replacing the l2-norm minimization in the conventional subspace-based method with an l1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  4. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    Science.gov (United States)

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588

  5. Feature based omnidirectional sparse visual path following

    OpenAIRE

    Goedemé, Toon; Tuytelaars, Tinne; Van Gool, Luc; Vanacker, Gerolf; Nuttin, Marnix

    2005-01-01

    Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Feature based omnidirectional sparse visual path following'', Proceedings IEEE/RSJ international conference on intelligent robots and systems - IROS2005, pp. 1003-1008, August 2-6, 2005, Edmonton, Alberta, Canada.

  6. Two alternate proofs of Wang's lune formula for sparse distributed memory and an integral approximation

    Science.gov (United States)

    Jaeckel, Louis A.

    1988-01-01

    In Kanerva's Sparse Distributed Memory, writing to and reading from the memory are done in relation to spheres in an n-dimensional binary vector space. Thus it is important to know how many points are in the intersection of two spheres in this space. Two proofs are given of Wang's formula for spheres of unequal radii, and an integral approximation for the intersection in this case.
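
The underlying counting problem, how many points of {0,1}^n lie within Hamming distance r1 of x and r2 of y when x and y differ in d coordinates, admits an elementary double sum, checked below against brute force for a small n. This is the generic count (reading "sphere" as a Hamming ball), not a reproduction of Wang's derivation.

```python
from itertools import product
from math import comb

def ball_intersection_count(n, d, r1, r2):
    """Points z in {0,1}^n with dist(z, x) <= r1 and dist(z, y) <= r2,
    where x and y differ in exactly d coordinates."""
    total = 0
    for a in range(d + 1):              # flips of x inside the d differing coordinates
        for b in range(n - d + 1):      # flips of x inside the n-d agreeing coordinates
            if a + b <= r1 and (d - a) + b <= r2:
                total += comb(d, a) * comb(n - d, b)
    return total

def brute_force(n, d, r1, r2):
    x = tuple([0] * n)
    y = tuple([1] * d + [0] * (n - d))
    count = 0
    for z in product((0, 1), repeat=n):
        d1 = sum(zi != xi for zi, xi in zip(z, x))
        d2 = sum(zi != yi for zi, yi in zip(z, y))
        count += (d1 <= r1 and d2 <= r2)
    return count

n, d, r1, r2 = 12, 5, 4, 6
print(ball_intersection_count(n, d, r1, r2), brute_force(n, d, r1, r2))  # the two counts agree
```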

  7. Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2012-11-29

    The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of involved parameters is large. This paper considers a model class of second order, linear, parametric, elliptic PDEs on a bounded domain D with diffusion coefficients depending on the parameters in an affine manner. For such models, it was shown in [9, 10] that under very weak assumptions on the diffusion coefficients, the entire family of solutions to such equations can be simultaneously approximated in the Hilbert space V = H_0^1(D) by multivariate sparse polynomials in the parameter vector y with a controlled number N of terms. The convergence rate in terms of N does not depend on the number of parameters, which may be arbitrarily large or countably infinite, thereby breaking the curse of dimensionality. However, these approximation results do not describe the concrete construction of these polynomial expansions, and should therefore rather be viewed as a benchmark for the convergence analysis of numerical methods. The present paper presents an adaptive numerical algorithm for constructing a sequence of sparse polynomials that is proved to converge toward the solution with the optimal benchmark rate. Numerical experiments are presented in large parameter dimension, which confirm the effectiveness of the adaptive approach. © 2012 EDP Sciences, SMAI.

  8. Convergence of quasi-optimal sparse-grid approximation of Hilbert-space-valued functions: application to random elliptic PDEs

    KAUST Repository

    Nobile, F.

    2015-10-30

    In this work we provide a convergence analysis for the quasi-optimal version of the sparse-grids stochastic collocation method we presented in a previous work: “On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods” (Beck et al., Math Models Methods Appl Sci 22(09), 2012). The construction of a sparse grid is recast into a knapsack problem: a profit is assigned to each hierarchical surplus and only the most profitable ones are added to the sparse grid. The convergence rate of the sparse grid approximation error with respect to the number of points in the grid is then shown to depend on weighted summability properties of the sequence of profits. This is a very general argument that can be applied to sparse grids built with any uni-variate family of points, both nested and non-nested. As an example, we apply such quasi-optimal sparse grids to the solution of a particular elliptic PDE with stochastic diffusion coefficients, namely the “inclusions problem”: we detail the convergence estimates obtained in this case using polynomial interpolation on either nested (Clenshaw–Curtis) or non-nested (Gauss–Legendre) abscissas, verify their sharpness numerically, and compare the performance of the resulting quasi-optimal grids with a few alternative sparse-grid construction schemes recently proposed in the literature.

  9. Convergence of quasi-optimal sparse-grid approximation of Hilbert-space-valued functions: application to random elliptic PDEs

    KAUST Repository

    Nobile, F.; Tamellini, L.; Tempone, Raul

    2015-01-01

    In this work we provide a convergence analysis for the quasi-optimal version of the sparse-grids stochastic collocation method we presented in a previous work: “On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods” (Beck et al., Math Models Methods Appl Sci 22(09), 2012). The construction of a sparse grid is recast into a knapsack problem: a profit is assigned to each hierarchical surplus and only the most profitable ones are added to the sparse grid. The convergence rate of the sparse grid approximation error with respect to the number of points in the grid is then shown to depend on weighted summability properties of the sequence of profits. This is a very general argument that can be applied to sparse grids built with any uni-variate family of points, both nested and non-nested. As an example, we apply such quasi-optimal sparse grids to the solution of a particular elliptic PDE with stochastic diffusion coefficients, namely the “inclusions problem”: we detail the convergence estimates obtained in this case using polynomial interpolation on either nested (Clenshaw–Curtis) or non-nested (Gauss–Legendre) abscissas, verify their sharpness numerically, and compare the performance of the resulting quasi-optimal grids with a few alternative sparse-grid construction schemes recently proposed in the literature.

  10. A Perceptually Reweighted Mixed-Norm Method for Sparse Approximation of Audio Signals

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll; Sturm, Bob L.

    2011-01-01

    In this paper, we consider the problem of finding sparse representations of audio signals for coding purposes. In doing so, it is of utmost importance that when only a subset of the present components of an audio signal are extracted, it is the perceptually most important ones. To this end, we propose a new iterative algorithm based on two principles: 1) a reweighted l1-norm based measure of sparsity; and 2) a reweighted l2-norm based measure of perceptual distortion. Using these measures, the considered problem is posed as a constrained convex optimization problem that can be solved optimally using standard software. A prominent feature of the new method is that it solves a problem that is closely related to the objective of coding, namely rate-distortion optimization. In computer simulations, we demonstrate the properties of the algorithm and its application to real audio signals.

  11. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul; Al-Naffouri, Tareq Y.

    2012-01-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical

  12. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  13. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  14. SAR Target Recognition via Local Sparse Representation of Multi-Manifold Regularized Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Meiting Yu

    2018-02-01

    Full Text Available The extraction of a valuable set of features and the design of a discriminative classifier are crucial for target recognition in SAR images. Although various features and classifiers have been proposed over the years, target recognition under extended operating conditions (EOCs) is still a challenging problem, e.g., targets with configuration variation, different capture orientations, and articulation. To address these problems, this paper presents a new strategy for target recognition. We first propose a low-dimensional representation model via incorporating a multi-manifold regularization term into the low-rank matrix factorization framework. Two rules, pairwise similarity and local linearity, are employed for constructing the multiple manifold regularization. By alternately optimizing the matrix factorization and manifold selection, the feature representation model can not only acquire the optimal low-rank approximation of the original samples, but also capture the intrinsic manifold structure information. Then, to take full advantage of the local structure property of the features and further improve the discriminative ability, local sparse representation is proposed for classification. Finally, extensive experiments on the moving and stationary target acquisition and recognition (MSTAR) database demonstrate the effectiveness of the proposed strategy, including target recognition under EOCs, as well as the capability with small training sizes.

  15. Comparison of Clenshaw–Curtis and Leja Quasi-Optimal Sparse Grids for the Approximation of Random PDEs

    KAUST Repository

    Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2015-01-01

    In this work we compare different families of nested quadrature points, i.e. the classic Clenshaw–Curtis and various kinds of Leja points, in the context of the quasi-optimal sparse grid approximation of random elliptic PDEs. Numerical evidence

  16. Comparison of Clenshaw–Curtis and Leja Quasi-Optimal Sparse Grids for the Approximation of Random PDEs

    KAUST Repository

    Nobile, Fabio

    2015-11-26

    In this work we compare different families of nested quadrature points, i.e. the classic Clenshaw–Curtis and various kinds of Leja points, in the context of the quasi-optimal sparse grid approximation of random elliptic PDEs. Numerical evidence suggests that both families perform comparably within such framework.

  17. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul

    2012-12-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity. © 1991-2012 IEEE.

  18. Semi-inner-products in Banach Spaces with Applications to Regularized Learning, Sampling, and Sparse Approximation

    Science.gov (United States)

    2016-03-13

    Praveen K. Yenduri, Anna C. Gilbert, Jun Zhang. Integrate-and-fire neuron modeled as a low-rate sparse time-encoding device. 2012 Third International Conference on Intelligent Control and Information Processing (ICICIP), 15 July 2012, Dalian, China.

  19. Sparse Generalized Fourier Series via Collocation-based Optimization

    Science.gov (United States)

    2014-11-01

    Reference fragments from the report: ... Theory 51, 12 (2005) 4203-4215. [6] P. Constantine, M. Eldred and E. Phipps, Sparse pseudospectral approximation method, Comput. Methods Appl. Mech. ... Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling, 224 (1997). [15] J. Shen and L. Wang, Some recent advances on spectral methods ...

  20. Sparse Representation Based SAR Vehicle Recognition along with Aspect Angle

    Directory of Open Access Journals (Sweden)

    Xiangwei Xing

    2014-01-01

    Full Text Available As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into the category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under variations of depression angle and target configuration, as well as incomplete observation.
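
The baseline SRC decision rule mentioned in the abstract, code the test sample over the stacked training dictionary and assign the class whose atoms give the smallest reconstruction residual, is sketched below on toy data; the aspect-angle projection that is the paper's actual contribution is omitted, and OMP is used as the sparse solver for brevity.

```python
import numpy as np

def omp(D, y, k=10):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    x = np.zeros(D.shape[1])
    x[support] = sol
    return x

def src_classify(D, labels, y, k=10):
    """Sparse representation classification: pick the class whose atoms
    reconstruct the test sample y with the smallest residual."""
    x = omp(D, y, k)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)     # keep only coefficients of class c
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Toy vectors standing in for PCA features of SAR chips (hypothetical data).
rng = np.random.default_rng(4)
centers = rng.standard_normal((3, 20))
D = np.vstack([c + 0.1 * rng.standard_normal((30, 20)) for c in centers]).T   # 20 x 90 dictionary
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], 30)
test = centers[1] + 0.1 * rng.standard_normal(20)
print("predicted class:", src_classify(D, labels, test / np.linalg.norm(test)))
```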

  1. Sparse Covariance Matrix Estimation by DCA-Based Algorithms.

    Science.gov (United States)

    Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham

    2017-11-01

    This letter proposes a novel approach using l0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the l0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the l0-norm are used that result in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and superiority compared with seven state-of-the-art methods.

  2. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is the core of wireless sensor networks. It can be solved with powerful beacons, which are equipped with global positioning system devices so that they know their own locations. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted Centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and lower requirements for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.

  3. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning.
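
The kernel at issue is the CSR sparse matrix-vector product, whose irregular accesses to the input vector are what ordering and partitioning strategies try to make cache-friendly. A plain-Python sketch of the kernel (not the paper's benchmark code):

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in Compressed Sparse Row format."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]   # x accesses follow the column ordering
    return y

# 1D Laplacian, 4x4, stored in CSR form.
values  = np.array([2., -1.,  -1., 2., -1.,  -1., 2., -1.,  -1., 2.])
col_idx = np.array([0,   1,    0,  1,  2,    1,  2,  3,    2,  3])
row_ptr = np.array([0, 2, 5, 8, 10])
x = np.array([1., 2., 3., 4.])
print(csr_spmv(values, col_idx, row_ptr, x))    # [0., 0., 0., 5.]
```

Reordering rows and columns changes the `col_idx` access pattern into `x`, which is exactly where cache reuse is won or lost.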

  4. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.

  5. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    Science.gov (United States)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of
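
For context, the SCAD penalty's coordinate-wise thresholding rule, which an AMP-style iteration applies at each step, has the standard closed form below; this is only the thresholding operator, not the paper's full SCAD-AMP with its state-evolution analysis.

```python
import numpy as np

def scad_threshold(z, lam, a=3.7):
    """Proximal (thresholding) operator of the SCAD penalty, applied elementwise."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    absz = np.abs(z)
    small = absz <= 2 * lam                     # soft-thresholding region
    mid = (absz > 2 * lam) & (absz <= a * lam)  # linearly interpolated region
    large = absz > a * lam                      # no-shrinkage region
    out[small] = np.sign(z[small]) * np.maximum(absz[small] - lam, 0.0)
    out[mid] = ((a - 1) * z[mid] - np.sign(z[mid]) * a * lam) / (a - 2)
    out[large] = z[large]
    return out

z = np.linspace(-4, 4, 9)
print(scad_threshold(z, lam=1.0))
# small |z|: shrunk toward zero; intermediate |z|: partially shrunk; large |z|: untouched
```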

  6. Sparse data structure design for wavelet-based methods

    Directory of Open Access Journals (Sweden)

    Latu Guillaume

    2011-12-01

    Full Text Available This course gives an introduction to the design of efficient datatypes for adaptive wavelet-based applications. It presents some code fragments and benchmarking techniques useful for learning about the design of sparse data structures and adaptive algorithms. Material and practical examples are given, and they provide a good introduction for anyone involved in the development of adaptive applications. An answer will be given to the question: how to implement and efficiently use the discrete wavelet transform in computer applications? A focus will be made on time-evolution problems, and the use of wavelet-based schemes for adaptively solving partial differential equations (PDEs). One crucial issue is that the benefits of the adaptive method in terms of algorithmic cost reduction must not be negated by the overheads associated with sparse data management.

  7. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Paul L. Rosin

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  8. Split-Bregman-based sparse-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vandeghinste, Bert; Vandenberghe, Stefaan [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Goossens, Bart; Pizurica, Aleksandra; Philips, Wilfried [Ghent Univ. (Belgium). Image Processing and Interpretation Research Group (IPI); Beenhouwer, Jan de [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Wilrijk (Belgium). The Vision Lab; Staelens, Steven [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Edegem (Belgium). Molecular Imaging Centre Antwerp

    2011-07-01

    Total variation minimization has been extensively researched for image denoising and sparse-view reconstruction. These methods show superior denoising performance for simple images with little texture, but result in texture information loss when applied to more complex images. It could thus be beneficial to use other regularizers within medical imaging. We propose a general regularization method, based on a split-Bregman approach. We show results for this framework combined with a total variation denoising operator, in comparison to ASD-POCS. We show that sparse-view reconstruction and noise regularization is possible. This general method will allow us to investigate other regularizers in the context of regularized CT reconstruction, and decrease the acquisition times in µCT. (orig.)

  9. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves the operation efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
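
    A minimal sketch of the sparse-similarity idea (using generic point data and a k-nearest-neighbour graph rather than the authors' multi-scale image features) is given below: the affinity matrix is kept sparse, so the spectral step scales to many data instances.

```python
# Minimal sketch (generic data, not the paper's multi-scale features): a sparse
# k-nearest-neighbour affinity matrix fed to spectral clustering.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)

# Sparse distance graph: each sample is connected to its 10 nearest neighbours only.
dist = kneighbors_graph(X, n_neighbors=10, mode="distance", include_self=False)
sigma = dist.data.mean()
affinity = dist.copy()
affinity.data = np.exp(-(dist.data ** 2) / (2 * sigma ** 2))   # Gaussian similarity
affinity = 0.5 * (affinity + affinity.T)                        # symmetrize

labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(np.bincount(labels))
```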

  10. An Improved Information Hiding Method Based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Minghai Yao

    2015-01-01

    Full Text Available A novel biometric authentication information hiding method based on sparse representation is proposed for enhancing the security of biometric information transmitted in the network. In order to make good use of the abundant information of the cover image, the sparse representation method is adopted to exploit the correlation between the cover and biometric images. Thus, the biometric image is divided into two parts. The first part is the reconstructed image, and the other part is the residual image. The biometric authentication image cannot be restored from either part alone. The residual image and sparse representation coefficients are embedded into the cover image. Then, to attract less attention from attackers, the visual attention mechanism is employed to select the embedding location and embedding sequence of the secret information. Finally, a histogram-based reversible watermarking algorithm is utilized to embed the secret information. To verify the validity of the algorithm, the PolyU multispectral palmprint and the CASIA iris databases are used as biometric information. The experimental results show that the proposed method exhibits good security, invisibility, and high capacity.

  11. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixtures in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, namely by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.

  12. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2016-08-01

    Full Text Available Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, with a Gaussian weight function used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  13. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex background, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in an unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e., joint dictionary and individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e., AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  14. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  15. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge about a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization) can be excluded from our proposed approach. They jointly improve image representation and classification performance.

  16. 3D ear identification based on sparse representation.

    Directory of Open Access Journals (Sweden)

    Lin Zhang

    Full Text Available Biometrics-based personal authentication is an effective way of automatically recognizing, with high confidence, a person's identity. Recently, 3D ear shape has attracted tremendous interest in the research community due to its richness of features and ease of acquisition. However, the existing ICP (Iterative Closest Point)-based 3D ear matching methods prevalent in the literature are not efficient for the one-to-many identification case. In this paper, we aim to fill this gap by proposing a novel, effective, fully automatic 3D ear identification system. We first propose an accurate and efficient template-based ear detection method. By utilizing such a method, the extracted ear regions are represented in a common canonical coordinate system determined by the ear contour template, which greatly facilitates the following stages of feature extraction and classification. For each extracted 3D ear, a feature vector is generated as its representation by making use of a PCA-based local feature descriptor. At the classification stage, we resort to the sparse representation based classification approach, which actually solves an l1-minimization problem. To the best of our knowledge, this is the first work introducing the sparse representation framework into the field of 3D ear identification. Extensive experiments conducted on a benchmark dataset corroborate the effectiveness and efficiency of the proposed approach. The associated Matlab source code and the evaluation results have been made publicly available online at http://sse.tongji.edu.cn/linzhang/ear/srcear/srcear.htm.
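
    The sparse representation based classification step can be sketched as follows; this is a generic SRC illustration on synthetic feature vectors (not the paper's PCA-based ear descriptors), where each test vector is coded over the training dictionary with an l1 penalty and assigned to the class with the smallest class-wise reconstruction residual.

```python
# Minimal sketch of sparse-representation-based classification (SRC) on synthetic
# feature vectors; the dictionary, labels, and l1 penalty are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, x, alpha=0.01):
    """D: (n_features, n_train) dictionary of l2-normalized training columns,
    labels: (n_train,) class of each column, x: (n_features,) test sample."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)
    code = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ code[mask])
    return min(residuals, key=residuals.get)   # class with smallest residual

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 40))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(4), 10)           # 4 classes, 10 training samples each
x = D[:, 3] + 0.05 * rng.standard_normal(50)   # noisy copy of a class-0 atom
print(src_classify(D, labels, x))              # expected: 0
```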

  17. Improved image registration by sparse patch-based deformation estimation.

    Science.gov (United States)

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potential large anatomical differences across individual images, which limits the registration performance. Fortunately, this issue could be alleviated if a good initial deformation can be provided for the two images under registration, which are often termed as the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images where each dictionary atom consists of the image intensity patch as well as their respective local deformations; (3) a small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we

  18. Radar Target Recognition Based on Stacked Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Zhao Feixiang

    2017-04-01

    Full Text Available Feature extraction is a key step in radar target recognition. The quality of the extracted features determines the performance of target recognition. However, obtaining the deep nature of the data is difficult using traditional methods. An autoencoder can learn features directly from data and can obtain feature representations at different levels of abstraction. To eliminate the influence of noise, a method of radar target recognition based on a stacked denoising sparse autoencoder is proposed in this paper. This method can extract features directly and efficiently by setting different hidden layers and numbers of iterations. Experimental results show that the proposed method is superior to the K-nearest neighbor method and the traditional stacked autoencoder.

  19. Confidence of model based shape reconstruction from sparse data

    DEFF Research Database (Denmark)

    Baka, N.; de Bruijne, Marleen; Reiber, J. H. C.

    2010-01-01

    Statistical shape models (SSM) are commonly applied for plausible interpolation of missing data in medical imaging. However, when fitting a shape model to sparse information, many solutions may fit the available data. In this paper we derive a constrained SSM to fit noisy sparse input landmarks...

  20. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  1. Alpha Matting with KL-Divergence Based Sparse Sampling.

    Science.gov (United States)

    Karacan, Levent; Erdem, Aykut; Erdem, Erkut

    2017-06-22

    In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.

  2. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Background: High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results: We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, the sparse group lasso, and the elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Conclusion: Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and, second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such

  3. A flexible framework for sparse simultaneous component based data integration.

    Science.gov (United States)

    Van Deun, Katrijn; Wilderjans, Tom F; van den Berg, Robert A; Antoniadis, Anestis; Van Mechelen, Iven

    2011-11-15

    High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, the sparse group lasso, and the elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and, second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such, structures can be found that are exclusively tied to one data platform
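
    To convey the flavour of sparse component analysis on multi-block data, the sketch below runs sparse PCA (scikit-learn) on two column-wise concatenated data blocks measured on the same samples; it is only illustrative and does not implement the paper's tunable block-wise penalties.

```python
# Minimal sketch (illustrative, not the authors' penalty framework): sparse
# components from two concatenated data blocks measured on the same samples.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
n_samples = 40
block1 = rng.standard_normal((n_samples, 100))   # e.g., platform 1 variables
block2 = rng.standard_normal((n_samples, 60))    # e.g., platform 2 variables
X = np.hstack([block1, block2])                  # simultaneous analysis of both blocks

model = SparsePCA(n_components=3, alpha=2.0, random_state=0)
scores = model.fit_transform(X)

# Inspect how many variables of each block load on each component.
loadings = model.components_                      # shape (3, 160)
for k, comp in enumerate(loadings):
    nz1 = np.count_nonzero(comp[:100])
    nz2 = np.count_nonzero(comp[100:])
    print(f"component {k}: {nz1} non-zero loadings in block 1, {nz2} in block 2")
```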

  4. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in

  5. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) the development of an iteration procedure that avoids the storage of FCI-size vectors; (ii) the development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations can be skipped, which otherwise may be a bottleneck of the procedure. At point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm makes it possible to perform FCI calculations on single-node workstations for systems previously accessible only by supercomputers
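
    The core of point (ii), a sparse-times-sparse product followed by a dropout threshold, can be illustrated with generic sparse matrices (toy dimensions and random entries, not an actual CI Hamiltonian):

```python
# Minimal sketch (toy dimensions, not a CI Hamiltonian): apply a sparse matrix to a
# sparse vector with scipy.sparse, then drop small entries so the product stays sparse.
import numpy as np
import scipy.sparse as sp

n = 100_000
H = sp.random(n, n, density=1e-4, random_state=0, format="csr")
H = 0.5 * (H + H.T)                               # symmetrize, like a Hamiltonian

v = sp.random(n, 1, density=1e-3, random_state=1, format="csr")

w = H @ v                                         # sparse matrix-vector product
threshold = 1e-3
w.data[np.abs(w.data) < threshold] = 0.0          # dropout of small coefficients
w.eliminate_zeros()
print("nnz of input vector:", v.nnz, "nnz of thresholded product:", w.nnz)
```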

  6. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  7. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
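
    The overall recipe, surrogate construction followed by quasi-Monte Carlo evaluation of the surrogate posterior, can be sketched for a toy one-parameter model as follows; the Chebyshev fit and Sobol sampler below stand in for the paper's sparse grid interpolant and are purely illustrative.

```python
# Minimal sketch (toy 1-parameter model, not the groundwater application): build a
# cheap polynomial surrogate of a forward model, then accumulate posterior density
# values on quasi-Monte Carlo (Sobol) samples instead of running MCMC.
import numpy as np
from scipy.stats import qmc
from numpy.polynomial import chebyshev

def forward_model(k):                       # stand-in for an expensive simulator
    return np.sin(3 * k) + 0.5 * k ** 2

# 1) Surrogate of the forward model on [0, 1].
nodes = np.linspace(0.0, 1.0, 9)
surrogate = chebyshev.Chebyshev.fit(nodes, forward_model(nodes), deg=6)

# 2) Quasi-Monte Carlo samples of the parameter.
sampler = qmc.Sobol(d=1, scramble=True, seed=0)
samples = sampler.random_base2(m=12).ravel()        # 4096 Sobol points in [0, 1)

# 3) Accumulate unnormalized posterior density values from the surrogate.
observation, noise_std = 0.8, 0.1
residual = surrogate(samples) - observation
posterior = np.exp(-0.5 * (residual / noise_std) ** 2)
posterior /= posterior.sum()
print("posterior mean of parameter:", np.sum(samples * posterior))
```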

  8. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    Science.gov (United States)

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

    The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
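
    The underlying linear model y = A s with a sparse spectrum s can be illustrated with a generic greedy solver; the sketch below uses a random sensing matrix in place of measured filter transmission functions and orthogonal matching pursuit in place of the paper's dictionary-learning pipeline.

```python
# Minimal sketch (synthetic data): filter-based spectral reconstruction posed as
# y = A s with a sparse spectrum s, recovered with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_channels, n_filters, n_peaks = 200, 40, 3      # spectrum length, filters, sparsity

# Ground-truth sparse spectrum: a few narrow emission lines.
s_true = np.zeros(n_channels)
s_true[rng.choice(n_channels, n_peaks, replace=False)] = rng.uniform(0.5, 1.0, n_peaks)

# Random sensing matrix standing in for the filter transmission curves (illustrative).
A = rng.standard_normal((n_filters, n_channels))
y = A @ s_true + 0.01 * rng.standard_normal(n_filters)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_peaks)
omp.fit(A, y)
s_hat = omp.coef_
print("support recovered:", np.flatnonzero(s_hat), "true:", np.flatnonzero(s_true))
```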

  9. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    Science.gov (United States)

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for the non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406

  10. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to some underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which can find the sparse solution from far fewer measurements than the length of the original signal. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based, sparseness-inducing, constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing a simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with the l0-norm regularization term. Secondly, an upper limit on the reconstruction error is presented for the proposed sparse signal reconstruction framework, and it is shown that a smaller reconstruction error than with l1-norm relaxation approaches can be achieved by using the proposed scheme in most cases. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
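
    A minimal illustration of joint l1/l2 regularization for sparse recovery is given below, using scikit-learn's elastic net as a stand-in for the paper's conjugate-gradient solver; matrix sizes and penalty weights are illustrative.

```python
# Minimal sketch (random measurements, not the paper's solver): joint l1/l2
# regularization for sparse recovery via the elastic net, which minimizes a
# least-squares fidelity term plus weighted l1 and l2 penalties.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n_measurements, n_dim, sparsity = 60, 200, 8

x_true = np.zeros(n_dim)
support = rng.choice(n_dim, sparsity, replace=False)
x_true[support] = rng.standard_normal(sparsity)

A = rng.standard_normal((n_measurements, n_dim)) / np.sqrt(n_measurements)
y = A @ x_true + 0.01 * rng.standard_normal(n_measurements)

# l1_ratio balances the l1 (sparsity) and l2 (stability) penalties.
model = ElasticNet(alpha=0.005, l1_ratio=0.9, fit_intercept=False, max_iter=10000)
model.fit(A, y)
x_hat = model.coef_
print("relative error: %.3f" % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```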

  11. Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression

    Science.gov (United States)

    Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang

    2018-05-01

    Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume. Then, the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10⁶ frames/s is required to image the falling droplet. However, in the high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction under microgravity and electrostatic levitation for the first time. Here, using an image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's related dictionary model, high- and low-resolution image patches were combined with dictionary training, and related high- and low-resolution dictionaries were obtained. An online double-sparse dictionary training algorithm was used to learn the related dictionaries and overcome the shortcomings of the traditional training algorithm with small image patches. At the image reconstruction stage, a kernel regression algorithm is added, which effectively overcomes the edge blurring of Yang's method.

  12. Sparse estimation of model-based diffuse thermal dust emission

    Science.gov (United States)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  13. Parametric Human Body Reconstruction Based on Sparse Key Points.

    Science.gov (United States)

    Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel

    2016-11-01

    We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.

  14. Sparse Parallel MRI Based on Accelerated Operator Splitting Schemes.

    Science.gov (United States)

    Cai, Nian; Xie, Weisi; Su, Zhenghang; Wang, Shanshan; Liang, Dong

    2016-01-01

    Recently, the sparsity that is implicit in MR images has been successfully exploited for fast MR imaging with incomplete acquisitions. In this paper, two novel algorithms are proposed to solve the sparse parallel MR imaging problem, which consists of l1 regularization and fidelity terms. The two algorithms combine forward-backward operator splitting and Barzilai-Borwein schemes. Theoretically, the presented algorithms overcome the nondifferentiability of the l1 regularization term. Meanwhile, they are able to treat a general matrix operator that may not be diagonalized by the fast Fourier transform and to ensure that a well-conditioned optimization system of equations is simply solved. In addition, we build connections between the proposed algorithms and the state-of-the-art existing methods and prove their convergence with a constant stepsize in the Appendix. Numerical results and comparisons with the advanced methods demonstrate the efficiency of the proposed algorithms.
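
    The two ingredients combined here, forward-backward splitting for the l1-regularized least-squares problem and a Barzilai-Borwein step size, can be sketched in a few lines for a dense toy operator (not an MRI encoding operator); a simple monotone safeguard stands in for the convergence machinery analysed in the paper.

```python
# Minimal sketch: forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1
# with a Barzilai-Borwein step size and a monotone fallback to the safe step 1/L.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(A, y, x, lam):
    r = A @ x - y
    return 0.5 * r @ r + lam * np.sum(np.abs(x))

def fb_bb(A, y, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - y)
    step = 1.0 / L
    for _ in range(n_iter):
        x_new = soft_threshold(x - step * grad, step * lam)   # prox (backward) step
        if objective(A, y, x_new, lam) > objective(A, y, x, lam):
            step = 1.0 / L                                    # safeguard: safe step
            x_new = soft_threshold(x - step * grad, step * lam)
        grad_new = A.T @ (A @ x_new - y)                      # gradient (forward) step
        s, g = x_new - x, grad_new - grad
        if s @ g > 1e-12:
            step = (s @ s) / (s @ g)                          # Barzilai-Borwein step
        x, grad = x_new, grad_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fb_bb(A, y, lam=0.02)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 0.1))
```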

  15. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    Science.gov (United States)

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n⁶). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  16. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region

  17. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purposes, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can be reconstructed by the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.

  18. Multiple-Factor Based Sparse Urban Travel Time Prediction

    Directory of Open Access Journals (Sweden)

    Xinyan Zhu

    2018-02-01

    Full Text Available The prediction of travel time is challenging given the sparseness of real-time traffic data and the uncertainty of travel, because it is influenced by multiple factors on congested urban road networks. In this paper, we propose a three-layer neural network built from big probe vehicle data incorporating multiple factors to estimate travel time. The procedure includes the following three steps. First, we aggregate data according to the travel time of a single taxi traveling a target link on working days, as traffic flows display similar patterns over a weekly cycle. We then extract feature relationships between target and adjacent links at 30-min intervals. About 224,830,178 records are extracted from probe vehicles. Second, we design a three-layer artificial neural network model. The number of neurons in the input layer is eight, and the number of neurons in the output layer is one. Finally, the trained neural network model is used for link travel time prediction. Different factors are included to examine their influence on link travel time. Our model is verified using historical data from probe vehicles collected from May to July 2014 in Wuhan, China. The results show that we can obtain link travel time predictions using the designed artificial neural network model and detect the influence of different factors on link travel time.
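
    A minimal stand-in for the described network (8 input neurons, one hidden layer, 1 output neuron) is shown below on synthetic data; the feature construction and hyperparameters are illustrative and not those fitted to the Wuhan probe-vehicle records.

```python
# Minimal sketch (synthetic data, not the probe-vehicle records): a three-layer
# neural network with 8 inputs and 1 output for link travel-time regression.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_records = 5000
X = rng.uniform(0.0, 1.0, (n_records, 8))        # 8 features: target/adjacent link history, etc.
y = 60.0 * X[:, 0] + 20.0 * X[:, 1] * X[:, 2] + 5.0 * rng.standard_normal(n_records)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out records:", round(model.score(X_test, y_test), 3))
```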

  19. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.

  20. Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen

    2017-08-29

    In recent years, sparse representation based classification (SRC) has been one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper, we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are simultaneously learned with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals of data are minimized in the projected low-dimensional space. Thus, the resulting representations can well fit SRC and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained with an alternating optimization method. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.

  1. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  2. Reliability-Based Topology Optimization Using Stochastic Response Surface Method with Sparse Grid Design

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A mathematical framework is developed which integrates the reliability concept into topology optimization to solve reliability-based topology optimization (RBTO) problems under uncertainty. Two typical methodologies have been presented and implemented, including the performance measure approach (PMA) and the sequential optimization and reliability assessment (SORA). To enhance the computational efficiency of the reliability analysis, the stochastic response surface method (SRSM) is applied to approximate the true limit state function with respect to the normalized random variables, combined with a reasonable design of experiments generated by sparse grid design (SGD), which has proven to be an effective and special discretization technique. Uncertainties such as material property and external loads are considered in three numerical examples: a cantilever beam, a loaded knee structure, and a heat conduction problem. Monte-Carlo simulations are also performed to verify the accuracy of the failure probabilities computed by the proposed approach. Based on the results, it is demonstrated that the application of SRSM with SGD can produce an efficient reliability analysis in RBTO which enables a more reliable design than that obtained by DTO. It is also found that, under identical accuracy, SORA is superior to PMA in view of computational efficiency.

  3. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and the cyclic S-matrix, based on sparse reconstruction and employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, implementing the Hadamard coding. In contrast with Hadamard transform spectrometry, which relies on shift invariance, this spectrometer may have the advantage of high efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
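
    The measurement-and-recovery setup can be sketched on synthetic data as follows: a spectrum that is sparse in a known (here, DCT) basis is Hadamard-encoded, and both a nonlinear (l1) sparse reconstruction and the plain linear inverse are computed for comparison. All sizes, bases, and noise levels are illustrative, not the instrument's.

```python
# Minimal sketch (synthetic spectrum): Hadamard-encoded measurements of a spectrum
# sparse in the DCT basis, recovered by l1 (nonlinear) and direct inversion (linear).
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import idct
from sklearn.linear_model import Lasso

n = 64
rng = np.random.default_rng(0)

# Spectrum sparse in a known (DCT) basis: x = Psi @ c with few non-zero c.
c_true = np.zeros(n)
c_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
Psi = idct(np.eye(n), norm="ortho", axis=0)       # inverse-DCT basis matrix
x_true = Psi @ c_true

H = hadamard(n).astype(float)                     # Hadamard encoding matrix
y = H @ x_true + 0.05 * rng.standard_normal(n)    # noisy encoded measurements

# Nonlinear solution: recover sparse coefficients, then the spectrum.
lasso = Lasso(alpha=0.02, fit_intercept=False, max_iter=10000)
lasso.fit(H @ Psi, y)
x_sparse = Psi @ lasso.coef_

# Linear solution for comparison: plain inverse of the Hadamard encoding.
x_linear = np.linalg.solve(H, y)
print("sparse rel. err.:", np.linalg.norm(x_sparse - x_true) / np.linalg.norm(x_true))
print("linear rel. err.:", np.linalg.norm(x_linear - x_true) / np.linalg.norm(x_true))
```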

  4. Nonuniform Sparse Data Clustering Cascade Algorithm Based on Dynamic Cumulative Entropy

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available A small amount of prior knowledge and randomly chosen initial cluster centers have a direct impact on the accuracy of the performance of an iterative clustering algorithm. In this paper we propose a new algorithm to compute initial cluster centers for k-means clustering and the best number of clusters with little prior knowledge, and to optimize the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparse degree to select the initial cluster centers of nonuniform sparse data and obtains initial data clusters by multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is adopted to optimize the initial data clusters and the best number of clusters. The experimental results show that the newly proposed algorithm performs well in obtaining the initial cluster centers for the k-means algorithm and effectively improves the clustering accuracy of nonuniform sparse data by about 5%.

  5. Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Detian Huang

    2018-01-01

    Full Text Available Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the practical application’s requirements. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-frequency information of the training samples as the characterization, and then the zero-phase component analysis whitening technique is utilized to decorrelate the formed joint training set to reduce its redundancy. Secondly, a constructed sparse regularization term is added to the cost function of the traditional sparse autoencoder to further strengthen the sparseness constraint on the hidden layer. Finally, in the dictionary learning stage, the improved sparse autoencoder is adopted to achieve unsupervised dictionary learning to improve the accuracy and stability of the dictionary. Experimental results validate that the proposed algorithm outperforms the existing algorithms both in terms of the subjective visual perception and the objective evaluation indices, including the peak signal-to-noise ratio and the structural similarity measure.

  6. Multi-Frequency Polarimetric SAR Classification Based on Riemannian Manifold and Simultaneous Sparse Representation

    Directory of Open Access Journals (Sweden)

    Fan Yang

    2015-07-01

    Full Text Available Normally, polarimetric SAR classification is a high-dimensional nonlinear mapping problem. In the realm of pattern recognition, sparse representation is a very efficacious and powerful approach. As classical descriptors of polarimetric SAR, covariance and coherency matrices are Hermitian semidefinite and form a Riemannian manifold. Conventional Euclidean metrics are not suitable for a Riemannian manifold, and hence, normal sparse representation classification cannot be applied to polarimetric SAR directly. This paper proposes a new land cover classification approach for polarimetric SAR. There are two principal novelties in this paper. First, a Stein kernel on a Riemannian manifold instead of Euclidean metrics, combined with sparse representation, is employed for polarimetric SAR land cover classification. This approach is named Stein-sparse representation-based classification (Stein-SRC). Second, using simultaneous sparse representation and reasonable assumptions on the correlation of representations among different frequency bands, Stein-SRC is generalized to simultaneous Stein-SRC for multi-frequency polarimetric SAR classification. These classifiers are assessed using polarimetric SAR images from the Airborne Synthetic Aperture Radar (AIRSAR) sensor of the Jet Propulsion Laboratory (JPL) and the Electromagnetics Institute Synthetic Aperture Radar (EMISAR) sensor of the Technical University of Denmark (DTU). Experiments on single-band and multi-band data both show that these approaches acquire more accurate classification results in comparison to many conventional and advanced classifiers.

  7. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    Science.gov (United States)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.

  8. ℓ0-based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation

    Science.gov (United States)

    Xu, Xia; Shi, Zhenwei; Pan, Bin

    2018-07-01

    Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, and a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which works in a non-convex manner. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectra and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristic of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization item. This regularization item is able to enforce the individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
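
    A minimal sketch of the bit-wise flipping step on a binary library-selection vector, under the assumption of a simple per-bit flip probability; the population size, library size, and flip rate are illustrative, not SMoSU's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def bit_flip(individual, p_flip=None):
    """Generate a new candidate by flipping bits of a binary endmember-selection vector.

    Each bit marks whether the corresponding library spectrum is active; p_flip
    defaults to 1/len so that, on average, one entry changes per offspring."""
    if p_flip is None:
        p_flip = 1.0 / len(individual)
    mask = rng.random(len(individual)) < p_flip
    child = individual.copy()
    child[mask] ^= True
    return child

# usage: evolve a population of binary selections over a library of 240 spectra
# pop = rng.random((30, 240)) < 0.02          # sparse initial selections
# offspring = np.array([bit_flip(ind) for ind in pop])
```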

  9. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    Science.gov (United States)

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (O(n^4), i.e., quartic time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  10. Constrained Convolutional Sparse Coding for Parametric Based Reconstruction of Line Drawings

    KAUST Repository

    Shaheen, Sara

    2017-12-25

    Convolutional sparse coding (CSC) plays an essential role in many computer vision applications ranging from image compression to deep learning. In this work, we shed light on a new application where CSC can effectively serve, namely line drawing analysis. The process of drawing a line drawing can be approximated as the sparse spatial localization of a number of typical basic strokes, which in turn can be cast as a non-standard CSC model that considers the line drawing formation process from parametric curves. These curves are learned to optimize the fit between the model and a specific set of line drawings. Parametric representation of sketches is vital in enabling automatic sketch analysis, synthesis and manipulation. A couple of sketch manipulation examples are demonstrated in this work. Consequently, our novel method is expected to provide a reliable and automatic method for parametric sketch description. Through experiments, we empirically validate the convergence of our method to a feasible solution.

  11. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  12. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.

  13. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wu Yiquan

    2017-08-01

    Full Text Available To investigate the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and their fusion image not being fit for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection maximization strategy. We then reconstruct these components to obtain the fused low-frequency components and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with the three classical fusion methods and recently presented fusion methods, e.g., that based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method we propose in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.

  14. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  15. Comparison of four support-vector based function approximators

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2004-01-01

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM and approximators based on it approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been

  16. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging

    Directory of Open Access Journals (Sweden)

    Xiaodong Zhang

    2016-01-01

    Full Text Available Segmentation of infarcts at the hyperacute stage is challenging as they exhibit substantial variability which may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize the sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle infarcts with intensity variability and ill-defined edges well, yielding a significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at the hyperacute stage with accuracy and speed to assist decision making, especially for thrombolytic therapy.
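
    A minimal sketch of elastic-net sparse coding of a single patch feature over a learned dictionary, using scikit-learn's ElasticNet; the penalty weights are illustrative and not the settings used in the study.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_code(patch_feature, dictionary, alpha=0.01, l1_ratio=0.7):
    """Sparse-code one patch feature over a learned dictionary with an elastic net.

    Mixing the L1 and L2 penalties (l1_ratio < 1) stabilizes the sparse code when
    dictionary atoms are correlated; the values here are illustrative."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False,
                       max_iter=5000)
    model.fit(dictionary, patch_feature)   # dictionary: (feature_dim, n_atoms)
    return model.coef_                     # sparse coefficient vector (n_atoms,)

# classification then proceeds on the sparse codes, e.g. via per-class
# reconstruction residuals or a trained classifier.
```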

  17. A Pansharpening Method Based on HCT and Joint Sparse Model

    Directory of Open Access Journals (Sweden)

    XU Ning

    2016-04-01

    Full Text Available A novel fusion method based on the hyperspherical color transformation (HCT) and the joint sparsity model is proposed for further decreasing the spectral distortion of the fused image. In the method, an intensity component and the angles of each band of the multispectral image are first obtained by HCT, and then the intensity component is fused with the panchromatic image through the wavelet transform and the joint sparsity model. In the joint sparsity model, the redundant and complementary information of the different images can be efficiently extracted and employed to yield high quality results. Finally, the fused multispectral image is obtained by inverse transforms of the wavelet and HCT on the new lower frequency image and the angle components, respectively. Experimental results on Pleiades-1 and WorldView-2 satellites indicate that the proposed method achieves remarkable results.

  18. Microstructure Images Restoration of Metallic Materials Based upon KSVD and Smoothing Penalty Sparse Representation Approach.

    Science.gov (United States)

    Li, Qing; Liang, Steven Y

    2018-04-20

    Microstructure images of metallic materials play a significant role in industrial applications. To address the image degradation problem of metallic materials, a novel image restoration technique based on K-means singular value decomposition (KSVD) and a smoothing penalty sparse representation (SPSR) algorithm is proposed in this work; the microstructure images of aluminum alloy 7075 (AA7075) material are used as examples. To begin with, to reflect the detailed structure characteristics of the damaged image, the KSVD dictionary is introduced to substitute the traditional sparse transform basis (TSTB) for sparse representation. Then, because image restoration modeling is a highly underdetermined problem and traditional sparse reconstruction methods may cause instability and obvious artifacts in the reconstructed images, especially when the reconstructed image has many smooth regions and the noise level is strong, the SPSR (here, q = 0.5) algorithm is designed to reconstruct the damaged image. The results of simulation and two practical cases demonstrate that the proposed method has superior performance compared with some state-of-the-art methods in terms of restoration performance factors and visual quality. Meanwhile, the grain size parameters and grain boundaries of the microstructure image are discussed before and after they are restored by the proposed method.

  19. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  20. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

    Science.gov (United States)

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-04-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
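
    The pipeline can be sketched with off-the-shelf tools: dictionary learning plus sparse coding to compress the surface features, followed by AdaBoost on the sparse codes. The atom count, transform settings and variable names are illustrative, not the configuration used in the study.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import AdaBoostClassifier

def sparse_code_features(X_train, X_test, n_atoms=128):
    """Reduce high-dimensional surface features by dictionary learning + sparse coding.

    X_train / X_test: (n_subjects, n_surface_features); settings are illustrative."""
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='lasso_lars',
                                       transform_alpha=0.1, random_state=0)
    Z_train = dico.fit_transform(X_train)   # sparse codes as new features
    Z_test = dico.transform(X_test)
    return Z_train, Z_test

# Z_train, Z_test = sparse_code_features(X_train, X_test)
# clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(Z_train, y_train)
# accuracy = clf.score(Z_test, y_test)
```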

  1. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    Science.gov (United States)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm can achieve a natural colorized effect for a gray-scale image, and it is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy rate of the classification, the corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement method based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of the visual gray-scale image, but also can be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.

  2. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF) based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ the support vector machines (SVMs) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing the sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515

  3. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum neighbor weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance to rectify the local source location bias that existed in the previous iteration solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with those source areas involved in visual processing reported in previous studies.

  4. A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs

    Directory of Open Access Journals (Sweden)

    Guixia He

    2016-01-01

    Full Text Available Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphic processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR fully outperforms CSR-scalar, CSR-vector, and CSRMV and HYBMV in the vendor-tuned CUSPARSE library and is comparable with a most recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR on a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, no matter whether the communication between GPUs is considered or not, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.
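
    For reference, the sequential CSR sparse matrix-vector product that kernels such as CSR-scalar, CSR-vector and PCSR parallelize looks as follows; the toy matrix in the usage note is illustrative.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Reference (scalar) CSR sparse matrix-vector product y = A @ x.

    On a GPU, CSR-scalar assigns this inner loop to one thread per row;
    CSR-vector and PCSR reorganize the memory accesses for coalescing."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows, dtype=x.dtype)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# usage: A = [[4, 0, 1], [0, 2, 0]] stored in CSR form
# values  = np.array([4.0, 1.0, 2.0])
# col_idx = np.array([0, 2, 1])
# row_ptr = np.array([0, 2, 3])
# y = spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0]))   # -> [5., 2.]
```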

  5. Doubly Nonparametric Sparse Nonnegative Matrix Factorization Based on Dependent Indian Buffet Processes.

    Science.gov (United States)

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Xu, Richard Yi Da; Luo, Xiangfeng

    2018-05-01

    Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, the traditional SNMF typically assumes the number of latent factors (i.e., dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of two stick weights associated with each column pair of factor matrices while still maintaining their respective marginal distribution specified by IBP. As a consequence, the generation of the two factor matrices will be columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using the bivariate Beta distribution and 2) using the Copula function. Compared with the single IBP-based NMF, the proposed framework jointly makes the two factor matrices nonparametric and sparse, which could be applied to broader scenarios, such as co-clustering. The proposed framework is seen to be much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variations in their nonzero entries. Our experiments on synthetic data show the merits of the proposed framework compared with the state-of-the-art models in respect of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate its efficiency in document-word co-clustering tasks.

  6. Machinery vibration signal denoising based on learned dictionary and sparse representation

    International Nuclear Information System (INIS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-01-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods. However, those methods are based on fixed basis functions or atoms. In this paper, a novel method is presented. The atoms used to represent signals are learned from the raw signal. And in order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to extract the best-matching columns (atoms) from the dictionary. At last, the denoised signal is calculated with the sparse vector and the learned dictionary. A simulation signal and a real bearing fault signal are utilized to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms. Its computing efficiency is then demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient. (paper)
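
    A simplified, offline sketch of the idea (patch extraction, dictionary learning, OMP sparse coding, overlap-averaged reconstruction) using scikit-learn rather than the online dictionary learning described above; patch length, atom count and sparsity level are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def denoise_1d(signal, patch_len=64, n_atoms=100, n_nonzero=5):
    """Denoise a 1-D vibration signal with a dictionary learned from its own patches.

    Patches are sparse-coded by OMP and averaged back; an offline simplification
    of the online learned-dictionary scheme described above."""
    idx = np.arange(len(signal) - patch_len + 1)
    patches = np.stack([signal[i:i + patch_len] for i in idx])   # overlapping patches
    mean = patches.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    codes = dico.fit_transform(patches - mean)                   # sparse vectors
    recon = codes @ dico.components_ + mean
    # overlap-average the reconstructed patches back into a signal
    out = np.zeros(len(signal), dtype=float)
    weight = np.zeros(len(signal), dtype=float)
    for row, i in zip(recon, idx):
        out[i:i + patch_len] += row
        weight[i:i + patch_len] += 1.0
    return out / weight
```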

  7. Removing flicker based on sparse color correspondences in old film restoration

    Science.gov (United States)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is a mainstream trend nowadays. In this paper, we propose a sparse color correspondences based technique to remove fading flicker from old films. Our approach, which combines multiple frames to establish a simple correction model, includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference frame parameters for color recovery correction and other frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show the method can remove fading flicker efficiently.

  8. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    Science.gov (United States)

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, it obtains fast and stable source reconstruction, even without a priori information of the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
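
    The IVTCG solver itself is not reproduced here; as a point of reference, the same sparseness-inducing objective (quadratic error plus an ℓ1 term) can be minimized with plain iterative soft thresholding, sketched below with illustrative parameters.

```python
import numpy as np

def ista(A, y, lam=1e-3, n_iter=500):
    """Minimize 0.5*||A s - y||^2 + lam*||s||_1 by iterative soft thresholding.

    A: forward (system) matrix mapping the source distribution s to the surface
    measurements y. A generic sparsity-promoting solver, not the IVTCG scheme."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ s - y)              # gradient of the quadratic term
        z = s - g / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return s
```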

  9. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral datasets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.

  10. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division modulation (OFDM) communication systems, channel state information (CSI) is required at the receiver due to the fact that a frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, and they can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these developed methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without additional computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method can improve the estimation performance when compared with conventional SCE methods. PMID:24983012
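
    For contrast with the Bayesian approach, a conventional OMP-based sparse channel estimate can be sketched in a few lines; this is a real-valued toy formulation (a complex baseband model would stack real and imaginary parts), and the assumed sparsity level is illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def omp_channel_estimate(X_pilot, y, n_taps=6):
    """Estimate a sparse channel impulse response from pilot observations.

    Model: y = X_pilot @ h + noise, with only a few dominant taps in h.
    Real-valued toy version of conventional OMP-based SCE; n_taps is the
    assumed sparsity level."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_taps)
    omp.fit(X_pilot, y)
    return omp.coef_          # estimated channel taps, mostly zeros

# X_pilot: (n_observations, channel_length) pilot/training matrix
# h_hat = omp_channel_estimate(X_pilot, y)
```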

  11. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.

  12. Quadrature demodulation based circuit implementation of pulse stream for ultrasonic signal FRI sparse sampling

    International Nuclear Information System (INIS)

    Shoupeng, Song; Zhou, Jiang

    2017-01-01

    Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms. No hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. After elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extracting techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, an oscillator, a low-pass filter (LPF), and a root-of-square-sum module. Finally, application experiments were carried out on pipeline sample ultrasonic flaw testing. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry. (paper)

  13. Toward a consistent random phase approximation based on the relativistic Hartree approximation

    International Nuclear Information System (INIS)

    Price, C.E.; Rost, E.; Shepard, J.R.; McNeil, J.A.

    1992-01-01

    We examine the random phase approximation (RPA) based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the one-loop level. We emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e,e') quasielastic response. We also study the effect of imposing a three-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin-orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. Consistency is much more important than the cutoff in the description of low-lying collective levels. The cutoff model also provides excellent agreement with quasielastic (e,e') data.

  14. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function of the Wiener filter is employed to efficiently remove blur in the sparse component; Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for CT images with large noise. PMID:24023764

  16. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
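
    A minimal sketch of encoding with a sparse binary measurement matrix: each column contains only a few ones, so every compressed sample is a small sum of input samples, which is what keeps the hardware cheap. The matrix construction below is a generic random one, not the QCAC or SRBM constructions of the paper, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_binary_matrix(m, n, ones_per_column=4):
    """Random sparse binary measurement matrix with a few 1s per column.

    Generic construction for illustration only; the paper uses deterministic
    QCAC matrices or structured sparse random binary matrices."""
    phi = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        phi[rows, j] = 1
    return phi

# encoding a window of n = 512 neural samples into m = 128 measurements
# phi = sparse_binary_matrix(128, 512)
# y = phi @ x            # x: raw neural samples in the window
```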

  17. Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data

    Directory of Open Access Journals (Sweden)

    Alexander P. Kartun-Giles

    2018-04-01

    Full Text Available A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite a large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models that are widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks as indicated by its information theory characterization and its network entropy. The use of the proposed network process as a null model is here tested on real data, indicating that the model offers a promising avenue for statistical network modelling.

  18. Estimation for sparse vegetation information in desertification region based on Tiangong-1 hyperspectral image.

    Science.gov (United States)

    Wu, Jun-Jun; Gao, Zhi-Hai; Li, Zeng-Yuan; Wang, Hong-Yan; Pang, Yong; Sun, Bin; Li, Chang-Long; Li, Xu-Zhi; Zhang, Jiu-Xing

    2014-03-01

    In order to estimate sparse vegetation information accurately in a desertification region, taking the southeast of Sunite Right Banner, Inner Mongolia, as the test site and the Tiangong-1 hyperspectral image as the main data, sparse vegetation coverage and biomass were retrieved based on the normalized difference vegetation index (NDVI) and the soil adjusted vegetation index (SAVI), combined with the field investigation data. Then the advantages and disadvantages of the two indices were compared. Firstly, the correlation between the vegetation indexes and vegetation coverage under different band combinations was analyzed, as well as the biomass. Secondly, the best band combination was determined as the one giving the maximum correlation coefficient between the vegetation indexes (VI) and the vegetation parameters. It showed that the maximum correlation coefficient between the vegetation parameters and NDVI could reach as high as 0.7, while that of SAVI could nearly reach 0.8. The center wavelength of the red band in the best band combination for NDVI was 630 nm, and that of the near infrared (NIR) band was 910 nm, whereas center wavelengths of 620 and 920 nm formed the best combination for SAVI. Finally, linear regression models were established to retrieve vegetation coverage and biomass based on the Tiangong-1 VIs. The R2 of all models was more than 0.5, while that of the models based on SAVI was higher than that based on NDVI; in particular, the R2 of the vegetation coverage retrieval model based on SAVI was as high as 0.59. By cross-validation, the standard errors (RMSE) of the SAVI-based models were lower than those of the NDVI-based models. The results showed that the abundant spectral information of the Tiangong-1 hyperspectral image can reflect the actual vegetation condition effectively, and SAVI can estimate sparse vegetation information more accurately than NDVI in desertification regions.
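
    The two indices and the retrieval model are simple to state; the sketch below computes NDVI and SAVI from red and near-infrared reflectance and fits the linear retrieval model by least squares. Band variable names and the soil adjustment factor L = 0.5 are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-12)

def savi(nir, red, L=0.5):
    """Soil adjusted vegetation index; L = 0.5 is the customary soil factor."""
    return (1 + L) * (nir - red) / (nir + red + L + 1e-12)

def fit_linear_model(vi_samples, field_values):
    """Least-squares line VI -> vegetation coverage (or biomass) from plot data."""
    slope, intercept = np.polyfit(vi_samples, field_values, deg=1)
    return slope, intercept

# band reflectances at (roughly) the best combinations reported above
# ndvi_img = ndvi(band_910nm, band_630nm)
# savi_img = savi(band_920nm, band_620nm)
# slope, intercept = fit_linear_model(savi_at_plots, coverage_at_plots)
```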

  19. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT images.

  20. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT images.

  1. Quantification of localized vertebral deformities using a sparse wavelet-based shape model.

    Science.gov (United States)

    Zewail, R; Elsafi, A; Durdle, N

    2008-01-01

    Medical experts often examine hundreds of spine x-ray images to determine the existence of various pathologies. Common pathologies of interest are anterior osteophytes, disc space narrowing, and wedging. By careful inspection of the outline shapes of the vertebral bodies, experts are able to identify and assess vertebral abnormalities with respect to the pathology under investigation. In this paper, we present a novel method for quantification of vertebral deformation using a sparse shape model. Using wavelets and independent component analysis (ICA), we construct a sparse shape model that benefits from the approximation power of wavelets and the capability of ICA to capture higher order statistics in wavelet space. The new model is able to capture localized pathology-related shape deformations, hence it allows for quantification of vertebral shape variations. We investigate the capability of the model to predict localized pathology related deformations. Next, using support-vector machines, we demonstrate the diagnostic capabilities of the method through the discrimination of anterior osteophytes in lumbar vertebrae. Experiments were conducted using a set of 150 contours from digital x-ray images of the lumbar spine. Each vertebra is labeled as normal or abnormal. Results reported in this work focus on anterior osteophytes as the pathology of interest.

  2. Time-Frequency Distribution of Music based on Sparse Wavelet Packet Representations

    DEFF Research Database (Denmark)

    Endelt, Line Ørtoft

    We introduce a new method for generating time-frequency distributions, which is particularly useful for the analysis of music signals. The method presented here is based on $\ell_1$ sparse representations of music signals in a redundant wavelet packet dictionary. The representations are found using the minimization methods basis pursuit and best orthogonal basis. Visualizations of the time-frequency distribution are constructed based on a simplified energy distribution in the wavelet packet decomposition. The time-frequency distributions emphasize structured musical content, including non-stationary content...

  3. The Real-Valued Sparse Direction of Arrival (DOA) Estimation Based on the Khatri-Rao Product

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-05-01

    Full Text Available Complex operations that lead to a heavy calculation burden are required when the direction of arrival (DOA) of a sparse signal is estimated using the array covariance matrix, and the solution of the multiple measurement vectors (MMV) model is difficult. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called L1-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed to a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized to obtain a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the KR product's property. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm.
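
    A minimal sketch of how the KR-product virtual dictionary and the vectorized (SMV) model are formed from the array covariance, before any sparse solver is applied; this shows the generic complex-valued construction, not the paper's real-valued transformation.

```python
import numpy as np
from scipy.linalg import khatri_rao

def kr_virtual_model(A, R):
    """Vectorize the array covariance into a single measurement vector model.

    vec(R) ≈ (conj(A) ⊙ A) p + sigma^2 vec(I), where ⊙ is the column-wise
    Khatri-Rao product, A holds steering vectors for the candidate DOAs, and
    p is the (sparse) source-power vector."""
    B = khatri_rao(np.conj(A), A)      # virtual overcomplete dictionary
    y = R.reshape(-1, order='F')       # vec(R), column-major stacking
    return B, y

# a sparse solver (generic here, not the paper's real-valued L1-RVSKR) is then
# applied to recover p from y ≈ B p
```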

  4. APPECT: An Approximate Backbone-Based Clustering Algorithm for Tags

    DEFF Research Database (Denmark)

    Zong, Yu; Xu, Guandong; Jin, Pin

    2011-01-01

    Traditional clustering methods such as Agglomerative Clustering, applied to tagging data, possess inherent drawbacks, such as sensitivity to initialization. In this paper, we instead make use of the approximate backbone of tag clustering results to find better tag clusters. In particular, we propose an APProximate backbonE-based Clustering algorithm for Tags (APPECT). The main steps of APPECT are: (1) we execute the K-means algorithm on a tag similarity matrix M times and collect a set of tag clustering results Z={C1,C2,…,Cm}; (2) we form the approximate backbone of Z by executing a greedy search; (3) we fix the approximate backbone as the initial tag clustering result and then assign the remaining tags into the corresponding clusters based on similarity. Experimental results on three real-world datasets, namely MedWorm, MovieLens and Dmoz, demonstrate the effectiveness and superiority of the proposed method against the traditional approaches.
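
    A minimal sketch of steps (1)-(2): run K-means several times and keep, as an approximate backbone, the groups of tags that end up in the same cluster in every run. The co-assignment intersection below stands in for the paper's greedy search, and the toy similarity matrix is an assumption.

    ```python
    # Sketch of APPECT steps (1)-(2): repeated K-means on a tag similarity
    # matrix, then the "always clustered together" groups as the backbone.
    import numpy as np
    from sklearn.cluster import KMeans

    def approximate_backbone(similarity, k, m_runs=10):
        n = similarity.shape[0]
        together = np.ones((n, n), dtype=bool)
        for run in range(m_runs):
            labels = KMeans(n_clusters=k, n_init=1, random_state=run).fit_predict(similarity)
            together &= (labels[:, None] == labels[None, :])  # co-assigned this run
        # The intersection of the per-run equivalence relations is itself an
        # equivalence relation, so its classes form the backbone groups.
        backbone, assigned = [], np.zeros(n, dtype=bool)
        for i in range(n):
            if not assigned[i]:
                group = np.flatnonzero(together[i])
                assigned[group] = True
                backbone.append(group.tolist())
        return backbone

    similarity = np.random.rand(50, 50)        # stand-in tag similarity matrix
    similarity = (similarity + similarity.T) / 2
    print(approximate_backbone(similarity, k=5))
    ```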

  5. Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object.

    Science.gov (United States)

    Kim, Hak Gu; Man Ro, Yong

    2017-11-27

    In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the hologram fringe pattern in each 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe patterns at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves 10-20 ms for 1024x1024 pixels while providing visually plausible results.

  6. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Science.gov (United States)

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
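
    As a small illustration of the descriptor and kernel described above, the sketch below builds regularized covariance (SPD) descriptors from EEG epochs and evaluates the log-Euclidean Gaussian kernel between them. The kernel width, the diagonal regularization, and the stand-in epochs are assumptions.

    ```python
    # Sketch: SPD covariance descriptors for EEG epochs and the log-Euclidean
    # Gaussian kernel k(X, Y) = exp(-||log(X) - log(Y)||_F^2 / (2 * sigma^2)).
    import numpy as np

    def covariance_descriptor(epoch, eps=1e-6):
        """epoch: (channels, samples) EEG segment -> regularized SPD matrix."""
        c = np.cov(epoch)
        return c + eps * np.eye(c.shape[0])

    def spd_log(S):
        """Matrix logarithm of an SPD matrix via its eigendecomposition."""
        w, v = np.linalg.eigh(S)
        return (v * np.log(w)) @ v.T

    def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
        d = spd_log(X) - spd_log(Y)            # embed SPD matrices in a flat space
        return np.exp(-np.linalg.norm(d, 'fro') ** 2 / (2 * sigma ** 2))

    epoch_a = np.random.randn(8, 256)          # stand-in 8-channel EEG epochs
    epoch_b = np.random.randn(8, 256)
    print(log_euclidean_gaussian_kernel(covariance_descriptor(epoch_a),
                                        covariance_descriptor(epoch_b), sigma=2.0))
    ```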

  7. 3D Part-Based Sparse Tracker with Automatic Synchronization and Registration

    KAUST Repository

    Bibi, Adel Aamer; Zhang, Tianzhu; Ghanem, Bernard

    2016-01-01

    In this paper, we present a part-based sparse tracker in a particle filter framework where both the motion and appearance model are formulated in 3D. The motion model is adaptive and directed according to a simple yet powerful occlusion handling paradigm, which is intrinsically fused in the motion model. Also, since 3D trackers are sensitive to synchronization and registration noise in the RGB and depth streams, we propose automated methods to solve these two issues. Extensive experiments are conducted on a popular RGBD tracking benchmark, which demonstrate that our tracker can achieve superior results, outperforming many other recent and state-of-the-art RGBD trackers.

  8. 3D Part-Based Sparse Tracker with Automatic Synchronization and Registration

    KAUST Repository

    Bibi, Adel Aamer

    2016-12-13

    In this paper, we present a part-based sparse tracker in a particle filter framework where both the motion and appearance model are formulated in 3D. The motion model is adaptive and directed according to a simple yet powerful occlusion handling paradigm, which is intrinsically fused in the motion model. Also, since 3D trackers are sensitive to synchronization and registration noise in the RGB and depth streams, we propose automated methods to solve these two issues. Extensive experiments are conducted on a popular RGBD tracking benchmark, which demonstrate that our tracker can achieve superior results, outperforming many other recent and state-of-the-art RGBD trackers.

  9. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work along with advanced total variation (TV) regularization for fan-beam sparse-view CT. The introduction of a selective matrix helps improve the reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces the memory usage. (paper)

  10. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    Science.gov (United States)

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to different optimization directions, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, together with suggested solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  11. Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.

    Science.gov (United States)

    Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L

    2015-09-01

    Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the-art approaches.
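
    A minimal sketch of the patch-coding step only: subject patches are sparsely coded over a dictionary of atlas patches, and soft tissue memberships are formed by accumulating coefficient mass per tissue class. Patch size, number of atoms, sparsity level, and the random stand-in data are assumptions, not the paper's settings.

    ```python
    # Sketch: sparse coding of subject patches over an atlas patch dictionary,
    # turning the codes into soft tissue memberships.
    import numpy as np
    from sklearn.decomposition import sparse_encode

    rng = np.random.default_rng(0)
    atlas_patches = rng.standard_normal((500, 27))    # 500 atlas patches (3x3x3 voxels)
    atlas_labels = rng.integers(0, 3, size=500)       # tissue label of each atlas patch
    subject_patches = rng.standard_normal((100, 27))  # patches around subject voxels

    # Sparse codes of subject patches over the atlas dictionary (OMP, 5 atoms each).
    codes = sparse_encode(subject_patches, atlas_patches,
                          algorithm='omp', n_nonzero_coefs=5)

    # Soft membership of each subject patch: accumulate |coefficient| per class.
    memberships = np.zeros((subject_patches.shape[0], 3))
    for t in range(3):
        memberships[:, t] = np.abs(codes[:, atlas_labels == t]).sum(axis=1)
    memberships /= memberships.sum(axis=1, keepdims=True) + 1e-12
    ```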

  12. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  13. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  14. Facial expression recognition based on Weber local descriptor and sparse representation

    Science.gov (United States)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in the area of computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed which achieve very high accuracy on face images without any interference. Nowadays, many researchers have begun to address the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: firstly, the face images are divided into many local patches; then the WLD histograms of each patch are extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.

  15. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2014-01-01

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially for those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including the design of a single parameter in a one-dimensional cubic polynomial function and the current pattern for impedance tomography.
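
    In schematic form (the standard Laplace argument, not the paper's exact derivation), the quantities involved are:

    ```latex
    \begin{align*}
      \mathrm{EIG}
        &= \int\!\!\int \log\frac{p(y\mid\theta)}{p(y)}\, p(y\mid\theta)\,p(\theta)\,dy\,d\theta
         = \mathbb{E}_{p(y)}\!\left[ D_{\mathrm{KL}}\!\bigl(p(\theta\mid y)\,\|\,p(\theta)\bigr) \right], \\
      p(\theta\mid y)
        &\approx \mathcal{N}\!\bigl(\hat{\theta},\,\Sigma(\hat{\theta})\bigr), \qquad
      \Sigma(\hat{\theta})^{-1} = -\nabla^{2}_{\theta}\log p(\theta\mid y)\big|_{\theta=\hat{\theta}},
    \end{align*}
    ```

    so that the inner Kullback-Leibler divergence admits a closed form in terms of $\hat{\theta}$ and $\Sigma(\hat{\theta})$, and only the outer integral over the prior remains, which is where the sparse quadratures mentioned above enter.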

  16. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan

    2014-01-06

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially for those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including the design of a single parameter in a one-dimensional cubic polynomial function and the current pattern for impedance tomography.

  17. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    Science.gov (United States)

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse (weekly or less frequent) and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model. A total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described in this report and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
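
    In schematic form, the described model for daily concentrations is of the type below; the symbols and functional forms are illustrative assumptions, not the report's exact specification.

    ```latex
    \[
      \log C(t) \;=\; \beta_0
        \;+\; \underbrace{\beta_1\, t}_{\text{long-term trend}}
        \;+\; \underbrace{\sum_{k=1}^{K}\bigl[a_k \sin(2\pi k t) + b_k \cos(2\pi k t)\bigr]}_{\text{seasonal wave}}
        \;+\; \underbrace{\gamma\, q(t)}_{\text{flow-related variability}}
        \;+\; \varepsilon(t),
    \]
    ```

    where $t$ is time in years, $q(t)$ is a streamflow anomaly, and $\varepsilon(t)$ is a serially correlated (for example, AR(1)) error term; conditional simulation then draws daily series consistent with the sparse, possibly censored observations.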

  18. On the Metric-based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2018-01-01

    In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...

  19. On the Metric-Based Approximate Minimization of Markov Chains

    DEFF Research Database (Denmark)

    Bacci, Giovanni; Bacci, Giorgio; Larsen, Kim Guldstrand

    2017-01-01

    We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al...

  20. An approximation method for diffusion based leaching models

    International Nuclear Information System (INIS)

    Shukla, B.S.; Dignam, M.J.

    1987-01-01

    In connection with the fixation of nuclear waste in a glassy matrix, equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations is solved to a satisfactory approximation for a matrix dissolving at a constant rate in a finite volume of leachant to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contributions to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)

  1. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    Science.gov (United States)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In a real-world scenario, scatterers are usually distributed in a block-sparse pattern. Such a distribution feature has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into the higher-order singular value decomposition (SVD) together with a dictionary construction procedure. The simulation experiments for ideal point targets show the robustness of the proposed algorithm to noise and sidelobe disturbance, which always influence the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper. The complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. The imaging results for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.

  2. Optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface.

    Science.gov (United States)

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-11-30

    Common spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) applications. Successful application of CSP depends on the filter band selection to a large degree. However, the most proper band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
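
    A minimal sketch of the pipeline described above: band-pass filter the EEG into overlapping bands, extract CSP log-variance features per band, select features with a sparse (Lasso) regression, and classify with an SVM. The band edges, filter order, number of CSP pairs, Lasso penalty, and stand-in data are all assumptions.

    ```python
    # Sketch of an SFBCSP-style pipeline on toy data (not the paper's exact settings).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.linalg import eigh
    from sklearn.linear_model import Lasso
    from sklearn.svm import SVC

    def bandpass(trials, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
        return filtfilt(b, a, trials, axis=-1)       # trials: (n_trials, channels, time)

    def csp_features(trials, labels, n_pairs=2):
        c0 = np.mean([np.cov(x) for x in trials[labels == 0]], axis=0)
        c1 = np.mean([np.cov(x) for x in trials[labels == 1]], axis=0)
        _, w = eigh(c0, c0 + c1)                      # generalized eigenvectors
        w = np.hstack([w[:, :n_pairs], w[:, -n_pairs:]])
        proj = np.einsum('cf,ncs->nfs', w, trials)    # spatially filtered trials
        return np.log(proj.var(axis=-1))              # (n_trials, 2 * n_pairs)

    fs = 250
    trials = np.random.randn(80, 22, 2 * fs)          # stand-in EEG trials
    labels = np.random.randint(0, 2, size=80)
    bands = [(f, f + 4) for f in range(4, 40, 2)]      # overlapping 4 Hz bands

    features = np.hstack([csp_features(bandpass(trials, lo, hi, fs), labels)
                          for lo, hi in bands])

    # Sparse regression keeps only the bands whose CSP features carry weight;
    # the surviving features feed a standard SVM.
    lasso = Lasso(alpha=0.05).fit(features, labels)
    selected = np.flatnonzero(lasso.coef_)
    if selected.size == 0:                             # fallback for the toy data
        selected = np.argsort(np.abs(lasso.coef_))[-4:]
    clf = SVC(kernel='linear').fit(features[:, selected], labels)
    ```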

  3. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas, such as security and surveillance. With respect to the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.

  4. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan eAgarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  5. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by a modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by using a 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from the international BCI Competition IV reached 84.72%.
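
    The paper fits a logistic model with a Sparse Group Lasso penalty by blockwise coordinate and coordinate gradient descent; as a small building-block illustration, the sketch below implements only the proximal operator of that penalty (element-wise soft-thresholding followed by group shrinkage), with an assumed group layout of one group per EEG channel and assumed penalty weights.

    ```python
    # Sketch: proximal operator of the Sparse Group Lasso penalty
    # lam1 * ||w||_1 + lam2 * sum_g ||w_g||_2 (a standard solver building block).
    import numpy as np

    def prox_sparse_group_lasso(w, groups, lam1, lam2, step=1.0):
        # Element-wise soft-thresholding (the l1, "sparse" part).
        v = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
        # Group-wise shrinkage: whole groups can vanish, which here would mean
        # dropping all features of an EEG channel at once.
        out = np.zeros_like(v)
        for g in np.unique(groups):
            idx = (groups == g)
            norm = np.linalg.norm(v[idx])
            if norm > 0.0:
                out[idx] = max(0.0, 1.0 - step * lam2 / norm) * v[idx]
        return out

    w = np.random.randn(32)                       # 4 channels x 8 features (assumed layout)
    groups = np.repeat(np.arange(4), 8)
    print(prox_sparse_group_lasso(w, groups, lam1=0.05, lam2=0.3))
    ```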

  6. Sparse grid-based polynomial chaos expansion for aerodynamics of an airfoil with uncertainties

    Directory of Open Access Journals (Sweden)

    Xiaojing WU

    2018-05-01

    Full Text Available Uncertainties can generate fluctuations in aerodynamic characteristics. Uncertainty Quantification (UQ) is applied to compute their impact on the aerodynamic characteristics. In addition, the contribution of each uncertainty to the aerodynamic characteristics should be computed by uncertainty sensitivity analysis. Non-Intrusive Polynomial Chaos (NIPC) has been successfully applied to uncertainty quantification and uncertainty sensitivity analysis. However, the non-intrusive polynomial chaos method becomes inefficient as the number of random variables adopted to describe uncertainties increases. This deficiency becomes significant in stochastic aerodynamic analysis considering geometric uncertainty because the description of geometric uncertainty generally needs many parameters. To address this deficiency, a Sparse Grid-based Polynomial Chaos (SGPC) expansion is used to perform uncertainty quantification and sensitivity analysis for stochastic aerodynamic analysis considering geometric and operational uncertainties. It is shown that the method is more efficient than non-intrusive polynomial chaos and the Monte Carlo Simulation (MCS) method for stochastic aerodynamic analysis. By uncertainty quantification, it can be learnt that the flow characteristics of shock wave and boundary layer separation are sensitive to the geometric uncertainty in the transonic region. The uncertainty sensitivity analysis reveals the individual and coupled effects among the uncertainty parameters. Keywords: Non-intrusive polynomial chaos, Sparse grid, Stochastic aerodynamic analysis, Uncertainty sensitivity analysis, Uncertainty quantification

  7. A Multicast Sparse-Grooming Algorithm Based on Network Coding in WDM Networks

    Science.gov (United States)

    Zhang, Shengfeng; Peng, Han; Sui, Meng; Liu, Huanlin

    2015-03-01

    To improve wavelength utilization and decrease the traffic blocking probability in sparse-grooming wavelength-division multiplexing (WDM) networks, a multicast sparse-grooming algorithm based on network coding (MCSA-NC) is put forward in this paper to solve the routing problem for dynamic multicast requests. In the proposed algorithm, a traffic-partition strategy is designed in which a coarse-granularity multicast request with grooming capability at the source node is split into several fine-granularity multicast requests, so as to increase the probability of successful traffic grooming in MCSA-NC. In addition, since multiple destinations should receive the data from the source of the multicast request at the same time, the traditional transmission mechanism is improved by constructing edge-disjoint paths for each split multicast request. Moreover, in order to reduce the number of wavelengths required and further decrease the traffic blocking probability, a light-tree reconfiguration mechanism is presented in the MCSA-NC, which selects a minimal-cost light tree from the established edge-disjoint paths for a new multicast request.

  8. Predictions of first passage times in sparse discrete fracture networks using graph-based reductions

    Science.gov (United States)

    Hyman, J.; Hagberg, A.; Srinivasan, G.; Mohd-Yusof, J.; Viswanathan, H. S.

    2017-12-01

    We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
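
    A minimal sketch of the reduction described above: take the union of the k shortest inflow-to-outflow paths in the graph and keep only that subnetwork. The synthetic graph, its edge weights, and k are stand-ins for whatever the DFN-to-graph mapping provides.

    ```python
    # Sketch: reduce a (synthetic) fracture-network graph to the union of the
    # k shortest source-to-target paths.
    import itertools
    import networkx as nx

    G = nx.connected_watts_strogatz_graph(60, 6, 0.3, seed=1)   # stand-in DFN graph
    for u, v in G.edges:
        G.edges[u, v]['weight'] = 1.0 + (u + v) % 5             # arbitrary positive costs

    source, target, k = 0, 59, 10
    paths = itertools.islice(
        nx.shortest_simple_paths(G, source, target, weight='weight'), k)

    # Union of the k paths defines the reduced subnetwork used for transport.
    subnetwork = nx.Graph()
    for path in paths:
        subnetwork.add_edges_from((u, v, G.edges[u, v]) for u, v in zip(path, path[1:]))

    print(f"full: {G.number_of_nodes()} nodes / {G.number_of_edges()} edges; "
          f"reduced: {subnetwork.number_of_nodes()} / {subnetwork.number_of_edges()}")
    ```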

  9. Intensity-based hierarchical elastic registration using approximating splines.

    Science.gov (United States)

    Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C

    2014-01-01

    We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can very well cope with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging, and the proposed approach significantly improved all registrations in terms of mean-square error compared with approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS

  10. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    Science.gov (United States)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
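
    For orientation, the sketch below is plain orthogonal matching pursuit (OMP), the greedy pursuit that SOMP builds on; the stabilization rule that selects the optimal sparsity level is not reproduced, and the dimensions and sparsity are assumptions.

    ```python
    # Sketch: plain orthogonal matching pursuit for y ~ A @ x with sparse x.
    import numpy as np

    def omp(A, y, n_nonzero):
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(n_nonzero):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef          # re-fit on the whole support
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))                    # stand-in design matrix
    A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(200)
    x_true[[5, 40, 123]] = [2.0, -1.5, 0.7]
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    print(np.flatnonzero(omp(A, y, n_nonzero=3)))
    ```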

  11. Face recognition based on two-dimensional discriminant sparse preserving projection

    Science.gov (United States)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs the within-class and between-class affinity graphs by constrained least squares (LS) and an l1-norm minimization problem, respectively. Operating directly on the image matrix, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of the samples while pushing samples from different classes apart. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.

  12. Sparse Representation Based Range-Doppler Processing for Integrated OFDM Radar-Communication Networks

    Directory of Open Access Journals (Sweden)

    Bo Kong

    2017-01-01

    Full Text Available In an integrated radar-communication network, multiuser access techniques with minimal performance degradation and without range-Doppler ambiguities are required, especially in a dense user environment. In this paper, a multiuser access scheme with a random subcarrier allocation mechanism is proposed for orthogonal frequency division multiplexing (OFDM) based integrated radar-communication networks. The expression of the modulation Symbol-Domain method combined with sparse representation (SR) for range-Doppler estimation is introduced and a parallel reconstruction algorithm is employed. The radar target detection performance is improved with less spectrum occupation. Additionally, a Doppler frequency detector is exploited to decrease the computational complexity. Numerical simulations show that the proposed method outperforms the traditional modulation Symbol-Domain method under ideal and realistic nonideal scenarios.

  13. Structural Sparse Tracking

    KAUST Repository

    Zhang, Tianzhu

    2015-06-01

    Sparse representation has been applied to visual tracking by finding the best target candidate with minimal reconstruction error by use of target templates. However, most sparse representation based trackers only consider holistic or local representations and do not make full use of the intrinsic structure among and inside target candidates, thereby making the representation less effective when similar objects appear or under occlusion. In this paper, we propose a novel Structural Sparse Tracking (SST) algorithm, which not only exploits the intrinsic relationship among target candidates and their local patches to learn their sparse representations jointly, but also preserves the spatial layout structure among the local patches inside each target candidate. We show that our SST algorithm accommodates most existing sparse trackers with the respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs favorably against several state-of-the-art methods.

  14. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Full Text Available Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  15. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Science.gov (United States)

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.

  16. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    International Nuclear Information System (INIS)

    Ma Xiang; Zabaras, Nicholas

    2009-01-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media

  17. Label inspection of approximate cylinder based on adverse cylinder panorama

    Science.gov (United States)

    Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo

    2013-12-01

    This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspection object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, where label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all cameras and builds a panorama for label inspection. Thirdly, considering the shaking of production lines and electronic signal errors, we design a real-time image registration to calculate offsets between the template and the inspected images. Experimental results demonstrate that our system is accurate, runs in real time, and can be applied to numerous real-time inspections of approximately cylindrical objects.

  18. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-23

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
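
    The kernel itself is GPU-resident and built on KBLAS; purely as a CPU illustration of the block-sparse storage and the SpMV it exploits, the sketch below assembles a toy BSR (block compressed sparse row) matrix with dense b x b blocks and multiplies it by a vector. The block size and matrix are assumptions.

    ```python
    # CPU illustration of block-sparse SpMV on a toy BSR matrix; the paper's
    # GPU/KBLAS kernel is not reproduced here.
    import numpy as np
    from scipy.sparse import bsr_matrix

    b = 4                                    # dense block size (e.g., species count)
    nbrow = 6                                # block rows/columns in the toy Jacobian
    rng = np.random.default_rng(0)

    # ~30% of the b x b blocks are dense, the rest are zero.
    blocks = np.zeros((nbrow, nbrow, b, b))
    mask = rng.random((nbrow, nbrow)) < 0.3
    blocks[mask] = rng.standard_normal((mask.sum(), b, b))
    dense = blocks.transpose(0, 2, 1, 3).reshape(nbrow * b, nbrow * b)
    A = bsr_matrix(dense, blocksize=(b, b))

    x = rng.standard_normal(nbrow * b)
    y = A @ x                                # each stored block contributes a small
                                             # dense matrix-vector product
    print(A.nnz // (b * b), "dense blocks;", np.allclose(y, dense @ x))
    ```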

  19. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.; Dongarra, Jack

    2016-01-01

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan

    2013-06-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration whose numerical evaluation in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  1. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan; Scavino, Marco; Tempone, Raul; Wang, Suojin

    2013-01-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration whose numerical evaluation in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  2. Inverter-based successive approximation capacitance-to-digital converter

    KAUST Repository

    Omran, Hesham

    2017-03-23

    An energy-efficient capacitance-to-digital converter (CDC) is provided that utilizes a capacitance-domain successive approximation (SAR) technique. Unlike SAR analog-to-digital converters (ADCs), analysis shows that for SAR CDCs, the comparator offset voltage will result in signal-dependent and parasitic-dependent conversion errors, which necessitates an op-amp-based implementation. The inverter-based SAR CDC contemplated herein provides robust, energy-efficient, and fast operation. The inverter-based SAR CDC may include a hybrid coarse-fine programmable capacitor array. The design of example embodiments is insensitive to analog references, and thus achieves very low temperature sensitivity without the need for calibration. Moreover, this design achieves improved energy efficiency.
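
    The record describes a hardware converter; purely as a behavioral illustration of the successive-approximation search such a converter performs (with an ideal comparator, no offset or parasitics), a sketch follows. The resolution and full-scale range are assumptions.

    ```python
    # Behavioral sketch of a successive-approximation (binary) search, the same
    # loop a SAR CDC performs in the capacitance domain.
    def sar_convert(is_input_at_least, n_bits):
        """is_input_at_least(code) mimics the comparator decision."""
        code = 0
        for bit in reversed(range(n_bits)):
            trial = code | (1 << bit)            # tentatively set the next bit
            if is_input_at_least(trial):         # keep it if the input >= trial level
                code = trial
        return code

    full_scale = 10.0                            # assumed range, e.g. picofarads
    n_bits = 10
    c_in = 3.72                                  # stand-in unknown input capacitance

    comparator = lambda code: c_in >= code * full_scale / (1 << n_bits)
    code = sar_convert(comparator, n_bits)
    print(code, code * full_scale / (1 << n_bits))
    ```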

  3. Sparse and Dispersion-Based Matching Pursuit for Minimizing the Dispersion Effect Occurring when Using Guided Wave for Pipe Inspection.

    Science.gov (United States)

    Rostami, Javad; Tse, Peter W T; Fang, Zhou

    2017-06-06

    Ultrasonic guided wave is an effective tool for structural health monitoring and defect detection. In practice, guided wave signals are dispersive and contain multiple modes and noise. In the presence of overlapped wave-packets/modes and noise together with dispersion, extracting meaningful information from these signals is a challenging task, and handling this challenge requires an advanced signal processing tool. The aim of this study is to develop an effective and robust signal processing tool to deal with the complexity of guided wave signals for non-destructive testing (NDT) purposes. To achieve this goal, Sparse Representation with Dispersion Based Matching Pursuit (SDMP) is proposed. Addressing the three abovementioned facts that complicate signal interpretation, SDMP separates overlapped modes and demonstrates good performance against noise with maximum sparsity. With the dispersion taken into account, an overcomplete and redundant dictionary of basic atoms based on a narrowband excitation signal is designed. As the Finite Element Method (FEM) was used to predict the form of wave packets propagating along structures, these atoms have the maximum resemblance to real guided wave signals. SDMP operates in two stages. In the first stage, similar to Matching Pursuit (MP), the approximation improves by adding a single atom to the solution set at each iteration. However, the atom selection criterion of SDMP utilizes the time localization of guided wave reflections, which makes a portion of the overlapped wave-packets composed mainly of a single echo. In the second stage of the algorithm, the selected atoms that are frequency-inconsistent with the excitation signal are discarded. This increases the sparsity of the final representation. Meanwhile, since the discarded atoms do not represent guided wave reflections, this leads to an accurate approximation and simplifies the extraction of physical meaning for defect detection purposes. To verify the effectiveness of SDMP for

  4. Sparse and Dispersion-Based Matching Pursuit for Minimizing the Dispersion Effect Occurring when Using Guided Wave for Pipe Inspection

    Directory of Open Access Journals (Sweden)

    Javad Rostami

    2017-06-01

    Full Text Available Ultrasonic guided wave is an effective tool for structural health monitoring and defect detection. In practice, guided wave signals are dispersive and contain multiple modes and noise. In the presence of overlapped wave-packets/modes and noise together with dispersion, extracting meaningful information from these signals is a challenging task, and handling this challenge requires an advanced signal processing tool. The aim of this study is to develop an effective and robust signal processing tool to deal with the complexity of guided wave signals for non-destructive testing (NDT) purposes. To achieve this goal, Sparse Representation with Dispersion Based Matching Pursuit (SDMP) is proposed. Addressing the three abovementioned facts that complicate signal interpretation, SDMP separates overlapped modes and demonstrates good performance against noise with maximum sparsity. With the dispersion taken into account, an overcomplete and redundant dictionary of basic atoms based on a narrowband excitation signal is designed. As the Finite Element Method (FEM) was used to predict the form of wave packets propagating along structures, these atoms have the maximum resemblance to real guided wave signals. SDMP operates in two stages. In the first stage, similar to Matching Pursuit (MP), the approximation improves by adding a single atom to the solution set at each iteration. However, the atom selection criterion of SDMP utilizes the time localization of guided wave reflections, which makes a portion of the overlapped wave-packets composed mainly of a single echo. In the second stage of the algorithm, the selected atoms that are frequency-inconsistent with the excitation signal are discarded. This increases the sparsity of the final representation. Meanwhile, since the discarded atoms do not represent guided wave reflections, this leads to an accurate approximation and simplifies the extraction of physical meaning for defect detection purposes. To verify the

  5. A Stokes drift approximation based on the Phillips spectrum

    Science.gov (United States)

    Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2016-04-01

    A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.

  6. Programming for Sparse Minimax Optimization

    DEFF Research Database (Denmark)

    Jonasson, K.; Madsen, Kaj

    1994-01-01

    We present an algorithm for nonlinear minimax optimization which is well suited for large and sparse problems. The method is based on trust regions and sequential linear programming. On each iteration, a linear minimax problem is solved for a basic step. If necessary, this is followed...... by the determination of a minimum norm corrective step based on a first-order Taylor approximation. No Hessian information needs to be stored. Global convergence is proved. This new method has been extensively tested and compared with other methods, including two well known codes for nonlinear programming...

  7. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) methods for pulmonary emphysema classification, in this paper, firstly, the dictionary of textons is learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of a state-of-the-art method based on basic rotation-invariant local binary pattern histograms and that of the texture classification method based on texton learning by k-means, which performs nearly the best among other approaches in the literature.

  8. Direction-of-Arrival Estimation Based on Sparse Recovery with Second-Order Statistics

    Directory of Open Access Journals (Sweden)

    H. Chen

    2015-04-01

    Full Text Available Traditional direction-of-arrival (DOA) estimation techniques perform Nyquist-rate sampling of the received signals and as a result require high storage. To reduce the sampling rate, we introduce level-crossing (LC) sampling, which captures samples whenever the signal crosses predetermined reference levels; the LC-based analog-to-digital converter (LC ADC) has been shown to efficiently sample certain classes of signals. In this paper, we focus on the DOA estimation problem by using second-order statistics based on the LC samples recorded on one sensor, along with the synchronous samples of the other sensors; a sparse angle-space scenario can be found by solving an $\ell_1$ minimization problem, giving the number of sources and their DOAs. The experimental results show that our proposed method, when compared with some existing norm-based constrained optimization compressive sensing (CS) algorithms as well as a subspace method, improves the DOA estimation performance, while using fewer samples than Nyquist-rate sampling and reducing sensor activity, especially for signals with long silent periods.
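
    The l1-based recovery step described above can be sketched in a few lines. The snippet below is an illustrative grid-based surrogate only: it uses a plain complex ISTA solver on a uniform-linear-array steering dictionary, not the level-crossing sampling scheme of the record, and the function name doa_l1, the sensor positions in wavelengths and all parameter values are hypothetical.

```python
import numpy as np

def doa_l1(sensor_pos, y, grid_deg, lam=0.05, n_iter=300):
    """Recover a sparse angular spectrum over a grid of candidate angles
    with complex ISTA; peaks in the output indicate source directions."""
    theta = np.deg2rad(np.asarray(grid_deg))
    # Steering dictionary for a linear array (sensor_pos in wavelengths)
    A = np.exp(2j * np.pi * np.outer(sensor_pos, np.sin(theta)))
    x = np.zeros(len(grid_deg), dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # gradient step size
    for _ in range(n_iter):
        x = x - step * A.conj().T @ (A @ x - y)   # gradient step on the fit
        mag = np.abs(x)
        # complex soft-threshold (prox of the l1 penalty)
        x *= np.maximum(1.0 - step * lam / np.maximum(mag, 1e-12), 0.0)
    return np.abs(x)
```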

  9. Speckle Reduction on Ultrasound Liver Images Based on a Sparse Representation over a Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Mohamed Yaseen Jabarulla

    2018-05-01

    Full Text Available Ultrasound images are corrupted with multiplicative noise known as speckle, which reduces the effectiveness of image processing and hampers interpretation. This paper proposes a multiplicative speckle suppression technique for ultrasound liver images, based on a new signal reconstruction model known as sparse representation (SR) over dictionary learning. In the proposed technique, the non-uniform multiplicative signal is first converted into additive noise using an enhanced homomorphic filter. This is followed by pixel-based total variation (TV) regularization and patch-based SR over a dictionary trained using K-singular value decomposition (KSVD). Finally, the split Bregman algorithm is used to solve the optimization problem and estimate the de-speckled image. In simulations performed on both synthetic and clinical ultrasound images for speckle reduction, the proposed technique achieved peak signal-to-noise ratios of 35.537 dB for the dictionary trained on noisy image patches and 35.033 dB for the dictionary trained using a set of reference ultrasound image patches. Further, the evaluation results show that the proposed method performs better than other state-of-the-art denoising algorithms in terms of both peak signal-to-noise ratio and subjective visual quality assessment.

  10. Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Chow, Edmond; Pothen, Alex

    2005-03-18

    This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide and conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.

  11. Online sparse Gaussian process based human motion intent learning for an electrically actuated lower extremity exoskeleton.

    Science.gov (United States)

    Long, Yi; Du, Zhi-Jiang; Chen, Chao-Feng; Dong, Wei; Wang, Wei-Dong

    2017-07-01

    The most important step for a lower extremity exoskeleton is to infer human motion intent (HMI), which contributes to achieving human-exoskeleton collaboration. Since the user is in the control loop, the relationship between human-robot interaction (HRI) information and HMI is nonlinear and complicated, and it is difficult to model with analytical approaches. The nonlinear mapping can instead be learned with machine learning approaches. Gaussian Process (GP) regression is suitable for high-dimensional, small-sample nonlinear regression problems, but it is restrictive for large data sets due to its computational complexity. In this paper, an online sparse GP algorithm is constructed to learn the HMI. The original training dataset is collected while the user wears the exoskeleton system, with friction compensation, and performs movement as unconstrained as possible. The dataset has two kinds of data: (1) physical HRI, collected by torque sensors placed at the interaction cuffs of the active joints, i.e., the knee joints; (2) joint angular position, measured by optical position sensors. To reduce the computational complexity of GP, grey relational analysis (GRA) is utilized to condense the original dataset and provide the final training dataset. The hyper-parameters are optimized offline by maximizing the marginal likelihood and are then applied in the online GP regression algorithm. The HMI, i.e., the angular position of the human joints, is regarded as the reference trajectory for the mechanical legs. To verify the effectiveness of the proposed algorithm, experiments are performed on a subject at a natural speed. The experimental results show that the HMI can be obtained in real time, which can be extended and employed in similar exoskeleton systems.
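
    To make the regression step concrete, the sketch below shows plain full GP regression with an RBF kernel; the record's online sparse GP would additionally restrict the kernel matrix to a small inducing subset and update it online. The function name, kernel hyper-parameters and data shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """Predict joint angles from HRI features with plain GP regression;
    a sparse/online variant would keep only a small inducing set of X_train."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)           # weights for the mean
    return rbf(X_test, X_train) @ alpha           # predictive mean
```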

  12. Sparse direct solver for large finite element problems based on the minimum degree algorithm

    Czech Academy of Sciences Publication Activity Database

    Pařík, Petr; Plešek, Jiří

    2017-01-01

    Roč. 113, November (2017), s. 2-6 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GA15-20666S; GA MŠk(CZ) EF15_003/0000493 Institutional support: RVO:61388998 Keywords: sparse direct solution * finite element method * large sparse linear systems Subject RIV: JR - Other Machinery OBOR OECD: Mechanical engineering Impact factor: 3.000, year: 2016 https://www.sciencedirect.com/science/article/pii/S0965997817302582
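
    As a rough illustration of the idea, the snippet below performs a sparse direct solve in SciPy with a minimum-degree-type fill-reducing ordering; the tridiagonal test matrix is a stand-in for a finite element system and is not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system (a 1-D Laplacian) playing the role of an FE matrix
n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Sparse direct solve; the fill-reducing ordering (a minimum-degree variant)
# keeps the LU factors sparse, which is the point of the record's solver
lu = spla.splu(A, permc_spec="MMD_AT_PLUS_A")
x = lu.solve(b)
print(np.allclose(A @ x, b))
```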

  13. MO-FG-204-08: Optimization-Based Image Reconstruction From Unevenly Distributed Sparse Projection Views

    International Nuclear Information System (INIS)

    Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi

    2015-01-01

    Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, so that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly distributed sparse views are available, e.g., image guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD phantom and an anthropomorphic head phantom. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. Together they form a CT scan with sparse (164) views at a 1:6 ratio. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) difference between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms FBP in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom. Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality

  14. Analog system for computing sparse codes

    Science.gov (United States)

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
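
    A minimal digital sketch of the LCA dynamics follows, assuming unit-norm dictionary atoms and a soft-threshold activation (the l1 case); variable names and step sizes are illustrative, and the system described in the record realizes these dynamics in analog circuitry rather than in a loop like this.

```python
import numpy as np

def lca(D, x, lam=0.1, step=0.05, n_steps=400):
    """Euler-integrated LCA sketch: units are driven by their match to the
    input, inhibit each other through atom overlaps, and are soft-thresholded."""
    b = D.T @ x                           # feedforward drive
    G = D.T @ D - np.eye(D.shape[1])      # lateral inhibition weights
    u = np.zeros(D.shape[1])              # internal (membrane) states
    a = np.zeros_like(u)
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # active codes
        u += step * (b - u - G @ a)                          # competition
    return a
```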

  15. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  16. Detecting Activation in fMRI Data: An Approach Based on Sparse Representation of BOLD Signal

    Directory of Open Access Journals (Sweden)

    Blanca Guillen

    2018-01-01

    Full Text Available This paper proposes a simple yet effective approach for detecting activated voxels in fMRI data by exploiting the inherent sparsity of the BOLD signal in the temporal and spatial domains. In the time domain, the approach combines the General Linear Model (GLM) with a Least Absolute Deviation (LAD) based regression method regularized by the pseudonorm l0 to promote sparsity in the parameter vector of the model. In the spatial domain, detection of activated regions is based on thresholding the spatial map of estimated parameters associated with a particular stimulus. The threshold is calculated by exploiting the sparseness of the BOLD signal in the spatial domain, assuming a Laplacian distribution model. The proposed approach is validated using synthetic and real fMRI data. For synthetic data, results show that the proposed approach is able to detect most activated voxels without any false activation. For real data, the method is evaluated through comparison with the SPM software. Results indicate that this approach can effectively find activated regions that are similar to those found by SPM, but using a much simpler approach. This study may lead to the development of robust spatial approaches that further simplify the complexity of classical schemes.

  17. SEE rate estimation based on diffusion approximation of charge collection

    Science.gov (United States)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.

  18. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates a data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels can be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  19. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates a data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels can be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  20. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR), with new feature histogram maps, for the classification of emphysema. First, an overcomplete dictionary of textons is learned via KSVD learning on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.

  1. Prediction of Protein-Protein Interaction By Metasample-Based Sparse Representation

    Directory of Open Access Journals (Sweden)

    Xiuquan Du

    2015-01-01

    Full Text Available Protein-protein interactions (PPIs) play key roles in many cellular processes such as transcription regulation, cell metabolism, and endocrine function. Understanding these interactions greatly promotes research on the pathogenesis and treatment of various diseases. A large amount of data has been generated by experimental techniques; however, most of these data are incomplete or noisy, and current biological experimental techniques are very time-consuming and expensive. In this paper, we propose a novel method (metasample-based sparse representation classification, MSRC) for PPI prediction. A group of metasamples is extracted from the original training samples, and the l1-regularized least squares method is then used to express a new testing sample as a linear combination of these metasamples. PPI prediction is achieved by using a discrimination function defined on the representation coefficients. MSRC is applied to a PPI dataset; it achieves 84.9% sensitivity and 94.55% specificity, which is slightly lower than the support vector machine (SVM) and much higher than naive Bayes (NB), neural networks (NN), and k-nearest neighbor (KNN). The results show that MSRC is efficient for PPI prediction.
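
    A compact sketch of the sparse-representation classification step, assuming the metasamples are stacked as columns of a matrix with one class label per column; the Lasso solver stands in for the l1-regularized least squares step, and the function name and parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(metasamples, labels, x, alpha=0.01):
    """Code a test sample over the metasample matrix with an l1 penalty,
    then assign the class whose metasamples reconstruct it with the
    smallest residual (the usual SRC decision rule)."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(metasamples, x)                 # solves min ||x - M c||^2 + a||c||_1
    coef = lasso.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - metasamples[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)
```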

  2. A sparse autoencoder-based deep neural network for protein solvent accessibility and contact number prediction.

    Science.gov (United States)

    Deng, Lei; Fan, Chao; Zeng, Zhiwen

    2017-12-28

    Direct prediction of the three-dimensional (3D) structures of proteins from one-dimensional (1D) sequences is a challenging problem. Significant structural characteristics such as solvent accessibility and contact number are essential for deriving restraints in modeling protein folding and protein 3D structure. Thus, accurately predicting these features is a critical step for 3D protein structure building. In this study, we present DeepSacon, a computational method that can effectively predict protein solvent accessibility and contact number by using a deep neural network, which is built on a stacked autoencoder and a dropout method. The results demonstrate that our proposed DeepSacon achieves a significant improvement in prediction quality compared with the state-of-the-art methods. We obtain 0.70 three-state accuracy for solvent accessibility, and 0.33 15-state accuracy and 0.74 Pearson Correlation Coefficient (PCC) for the contact number on the 5729 monomeric soluble globular protein dataset. We also evaluate the performance on the CASP11 benchmark dataset; DeepSacon achieves 0.68 three-state accuracy and 0.69 PCC for solvent accessibility and contact number, respectively. We have shown that DeepSacon can reliably predict solvent accessibility and contact number with a stacked sparse autoencoder and a dropout approach.

  3. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Science.gov (United States)

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  4. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Directory of Open Access Journals (Sweden)

    Xiangbin Zhu

    Full Text Available Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  5. Seismic detection method for small-scale discontinuities based on dictionary learning and sparse representation

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei

    2017-02-01

    Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated with noise, and therefore effectively extracting them from seismic data is a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can extract high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model, which contains a two-stage iterative procedure of sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, this method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented that updates a number of atoms simultaneously. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. The field example of carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
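
    For reference, the sparse coding stage mentioned above can be illustrated with a plain (single-atom-per-iteration) OMP, as sketched below; the regularized multi-atom variant proposed in the record is not reproduced here, and the function signature is an assumption.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Plain Orthogonal Matching Pursuit sketch: greedily pick the atom most
    correlated with the residual, then refit all selected coefficients."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None) # refit
        residual = x - D[:, support] @ coef
    return support, coef
```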

  6. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Full Text Available Considering the pros and cons of the contourlet transform and the characteristics of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse the medical images. The results strongly suggest that the proposed algorithm can improve the visual effect and quality of medical image fusion, as well as image denoising and enhancement.

  7. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of the speech rank value to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.

  8. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background and dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
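
    A generic sketch of an L + S split by alternating proximal steps (singular-value thresholding for the low-rank part, elementwise soft-thresholding for the sparse part); this is not the authors' exact iterative soft thresholding solver, and the regularization weights are placeholders.

```python
import numpy as np

def low_rank_plus_sparse(Y, lam=0.1, mu=1.0, n_iter=100):
    """Alternating minimization of 0.5||Y - L - S||_F^2 + mu||L||_* + lam||S||_1."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - mu, 0.0)) @ Vt        # low-rank update
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)      # sparse update
    return L, S
```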

  9. Index finger motor imagery EEG pattern recognition in BCI applications using dictionary cleaned sparse representation-based classification for healthy people

    Science.gov (United States)

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Fengkui; Liu, Feixiang

    2017-09-01

    Electroencephalogram (EEG)-based motor imagery (MI) brain-computer interface (BCI) has shown its effectiveness for the control of rehabilitation devices designed for large body parts of the patients with neurologic impairments. In order to validate the feasibility of using EEG to decode the MI of a single index finger and constructing a BCI-enhanced finger rehabilitation system, we collected EEG data during right hand index finger MI and rest state for five healthy subjects and proposed a pattern recognition approach for classifying these two mental states. First, Fisher's linear discriminant criteria and power spectral density analysis were used to analyze the event-related desynchronization patterns. Second, both band power and approximate entropy were extracted as features. Third, aiming to eliminate the abnormal samples in the dictionary and improve the classification performance of the conventional sparse representation-based classification (SRC) method, we proposed a novel dictionary cleaned sparse representation-based classification (DCSRC) method for final classification. The experimental results show that the proposed DCSRC method gives better classification accuracies than SRC and an average classification accuracy of 81.32% is obtained for five subjects. Thus, it is demonstrated that single right hand index finger MI can be decoded from the sensorimotor rhythms, and the feature patterns of index finger MI and rest state can be well recognized for robotic exoskeleton initiation.

  10. Approximate labeling via graph cuts based on linear programming.

    Science.gov (United States)

    Komodakis, Nikos; Tziritas, Georgios

    2007-08-01

    A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.

  11. Approximate Dynamic Programming Based on High Dimensional Model Representation

    Czech Academy of Sciences Publication Activity Database

    Pištěk, Miroslav

    2013-01-01

    Roč. 49, č. 5 (2013), s. 720-737 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GAP102/11/0437 Institutional support: RVO:67985556 Keywords : approximate dynamic programming * Bellman equation * approximate HDMR minimization * trust region problem Subject RIV: BC - Control Systems Theory Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/pistek-0399560.pdf

  12. Analysis Sparse Representation for Nonnegative Signals Based on Determinant Measure by DC Programming

    Directory of Open Access Journals (Sweden)

    Yujie Li

    2018-01-01

    Full Text Available Analysis sparse representation has recently emerged as an alternative approach to the synthesis sparse model. Most existing algorithms typically employ the l0-norm, which is generally NP-hard. Other existing algorithms employ the l1-norm to relax the l0-norm, which sometimes cannot promote adequate sparsity. Most of these existing algorithms focus on general signals and are not suitable for nonnegative signals. However, many signals are necessarily nonnegative, such as spectral data. In this paper, we present a novel and efficient analysis dictionary learning algorithm for nonnegative signals with a determinant-type sparsity measure, which is convex and differentiable. The analysis sparse representation is cast as three subproblems: sparse coding, dictionary update, and signal update. Because the determinant-type sparsity measure results in a complex nonconvex optimization problem that cannot easily be solved by standard convex optimization methods, the proposed algorithms use a difference of convex (DC) programming scheme for solving the nonconvex problem. According to our theoretical analysis and simulation study, the main advantage of the proposed algorithm is its greater dictionary learning efficiency, particularly compared with state-of-the-art algorithms. In addition, our proposed algorithm performs well in image denoising.

  13. Single-Trial Evoked Potential Estimating Based on Sparse Coding under Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Nannan Yu

    2018-01-01

    Full Text Available Estimating single-trial evoked potentials (EPs) corrupted by the spontaneous electroencephalogram (EEG) can be regarded as a signal denoising problem. Sparse coding has had significant success in signal denoising, and EPs have been proven to have strong sparsity over an appropriate dictionary. In sparse coding, the noise is generally considered to be a Gaussian random process. However, some studies have shown that the background noise in EPs may present an impulsive characteristic which is far from Gaussian but suitable to be modeled by the α-stable distribution (1 < α ≤ 2). Consequently, the performance of general sparse coding will degrade or even fail. In view of this, we present a new sparse coding algorithm using p-norm optimization for single-trial EP estimation. The algorithm can track the underlying EPs corrupted by α-stable distribution noise, trial by trial, without the need to estimate the α value. Simulations and experiments on human visual evoked potentials and event-related potentials are carried out to examine the performance of the proposed approach. Experimental results show that the proposed method is effective in estimating single-trial EPs under an impulsive noise environment.

  14. Characterizing spatiotemporal information loss in sparse-sampling-based dynamic MRI for monitoring respiration-induced tumor motion in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Arai, Tatsuya J. [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Nofiele, Joris; Yuan, Qing [Department of Radiology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Madhuranthakam, Ananth J.; Pedrosa, Ivan; Chopra, Rajiv [Department of Radiology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Advanced Imaging Research Center, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Sawant, Amit, E-mail: amit.sawant@utsouthwestern.edu [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Department of Radiology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland, 21201 (United States)

    2016-06-15

    Purpose: Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of “true” information due to greater reliance on a priori information. Methods: Lung tumor motion trajectories in the superior–inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. Results: In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3–0

  15. Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification

    KAUST Repository

    Winokur, J.; Kim, D.; Bisetti, Fabrizio; Le Maître, O. P.; Knio, Omar

    2015-01-01

    We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a

  16. Airship Sparse Array Antenna Radar Real Aperture Imaging Based on Compressed Sensing and Sparsity in Transform Domain

    Directory of Open Access Journals (Sweden)

    Li Liechen

    2016-02-01

    Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometric operation can be applied to the real aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, Compressed Sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can reach that of the image formed by full array imaging. The simulation results show the effectiveness of the proposed method.

  17. Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination

    Science.gov (United States)

    2010-02-01


  18. Synthesis Of Ultrasound Field Sources Based on Phase Screen Approximation

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2018-01-01

    Full Text Available A method is proposed for synthesizing acoustic field sources based on a phase screen approximation. A technology for manufacturing ultrasonic phased arrays that form a field with a given distribution is proposed. An experimental setup has been developed for the formation of a vortex field at a distance of 10 cm.

  19. Sparse Adaptive Channel Estimation Based on lp-Norm-Penalized Affine Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Yingsong Li

    2014-01-01

    Full Text Available We propose an lp-norm-penalized affine projection algorithm (LP-APA) for broadband multipath adaptive channel estimation. The proposed LP-APA is realized by incorporating an lp-norm into the cost function of the conventional affine projection algorithm (APA) to exploit the sparsity property of the broadband wireless multipath channel, by which the convergence speed and steady-state performance of the APA are significantly improved. The implementation of the LP-APA is equivalent to adding a zero attractor to its iterations. The simulation results, which are obtained from a sparse channel estimation, demonstrate that the proposed LP-APA can efficiently improve channel estimation performance in terms of both convergence speed and steady-state performance when the channel is exactly sparse.
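
    A simplified sketch of the idea of adding a zero attractor to an affine projection update; here the attractor is an l1-style sign() term rather than the paper's lp-norm gradient, and the step sizes, filter length and projection order are illustrative only.

```python
import numpy as np

def zero_attracting_apa(x, d, n_taps=16, K=4, mu=0.5, rho=1e-4, eps=1e-6):
    """Identify a sparse FIR channel from input x and desired output d
    using an affine projection update followed by a zero attractor."""
    w = np.zeros(n_taps)
    for n in range(n_taps + K - 2, len(x)):
        # Stack the K most recent input vectors (affine projection block)
        X = np.column_stack([x[n - k - np.arange(n_taps)] for k in range(K)])
        e = d[n - np.arange(K)] - X.T @ w                       # block error
        w += mu * X @ np.linalg.solve(X.T @ X + eps * np.eye(K), e)
        w -= rho * np.sign(w)            # zero attractor pulls small taps to 0
    return w
```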

  20. Dictionaries for Sparse Neural Network Approximation

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    submitted 27.12.2017 (2018) ISSN 2162-237X R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords: measures of sparsity * feedforward networks * binary classification * dictionaries of computational units * Chernoff-Hoeffding Bound Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 6.108, year: 2016

  1. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes an activity-level judgement on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.

  2. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    Science.gov (United States)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

    Hyperspectral images (HSIs) provide excellent feature discriminability by supplying diagnostic characteristics with high spectral resolution. However, various degradations, including water absorption and band-wise continuous noise, may negatively affect the spectral information. On the other hand, the huge data volume and strong redundancy among spectra create an intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing spectral diagnostic characteristics is therefore of irreplaceable significance for the subsequent application of HSIs. This paper introduces a spectrum restoration method for HSIs that makes use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we implement a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the uninjured or key bands. Finally, we apply the low rank matrix recovery (LRMR) algorithm for post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under an artificial water absorption condition and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate the effectiveness in recovering the spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometry features are well preserved.

  3. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    Full Text Available We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented in circuits and are an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of handling the term in the set-valued map of the differential inclusion. In theory, the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
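
    The continuous-time network is not reproduced here, but its discrete analogue, a projected/proximal gradient iteration (ISTA), conveys the same idea: a gradient step on the data-fit term followed by a soft-threshold. The function name and parameter values are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5||Ax - y||^2 + lam||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x
```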

  4. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    International Nuclear Information System (INIS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Liang, Jimin; Tian, Jie; Cao, Feng

    2014-01-01

    To address the multicollinearity issue and the unequal contributions of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemic model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junctions, connectivity density, segment number and vascular length, indicating that these are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs applied directly to the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters are assembled into the sparse PCs with the highest relative importance. (paper)
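
    A small sketch of the first step, grouping correlated parameters into sparse principal components with scikit-learn; the random matrix stands in for the nine measured vascular parameters and is not the study's data.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Placeholder matrix of 9 vascular parameters measured on 60 samples
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 9))

# Group correlated parameters into a few sparse principal components,
# easing multicollinearity before the downstream importance analysis
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
loadings = spca.fit(X).components_        # (3, 9) sparse loading matrix
print(np.round(loadings, 2))
```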

  5. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Full Text Available Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). The reconstruction algorithm for a sparse-view imaging system is therefore of significant importance. Spatial-domain iterative algorithms for CT image reconstruction have low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel beam CT. The experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using the TV method and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.

  6. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with its modernization based on expanding the main structures of the core algorithm with respect to a parameter. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that problem is as follows: from the end point of the phase trajectory, a path must be found that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function equals zero; otherwise it is greater than zero. For this special problem the Bellman equation is considered and a supporting approximation is selected. The Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero there. We therefore use a special trick: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, yielding an expanded original chain. A nonzero initial condition is selected for the new variable, so the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation to be maintained. As a result of these procedures, successive-improvement algorithms are designed. Relaxation conditions for the algorithms and necessary optimality conditions are also obtained.

  7. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of a much complicated nonlinear relationship through local and piecewise linear ... training examples, where x(i) ∈ RN is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better result ... normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
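
    The ZCA whitening step referred to in the fragments above can be sketched as follows; the epsilon regularizer and function name are assumptions.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening sketch: decorrelate the features and rotate back so the
    whitened data stays close to the original coordinate axes."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return Xc @ W
```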

  8. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, other three sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  9. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

    Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements. That is, the effective array aperture of the physical array remains unchanged. However, due to the finite number of sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix and thus the DOA estimation capability degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal matrix, whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
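    The structural restoration step can be illustrated by projecting an estimated matrix onto the set of Toeplitz matrices by averaging along its diagonals. Whether the paper uses exactly this averaging rule for the reduced-dimension FOC matrix is not stated in this record, so the sketch below is only a plausible, hedged reading of the Toeplitz approximation step.

```python
# Projection of an estimated (noisy) matrix onto the set of Toeplitz
# matrices by averaging each diagonal -- an assumed, illustrative reading
# of the "Toeplitz approximation" step, not taken verbatim from the paper.
import numpy as np

def toeplitz_approximation(R):
    """Return the Toeplitz matrix whose k-th diagonal is the mean of R's k-th diagonal."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        d = np.mean(np.diagonal(R, offset=k))   # average along the k-th diagonal
        idx = np.arange(n - abs(k))
        if k >= 0:
            T[idx, idx + k] = d
        else:
            T[idx - k, idx] = d
    return T

# Toy usage: a Toeplitz matrix corrupted by noise is (approximately) recovered.
rng = np.random.default_rng(0)
c = np.array([3.0, 1.0, 0.5, 0.1])
R_true = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])
R_hat = toeplitz_approximation(R_true + 0.05 * rng.standard_normal((4, 4)))
```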

  10. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  11. Inverter-based successive approximation capacitance-to-digital converter

    KAUST Repository

    Omran, Hesham; Alhoshany, Abdulaziz; Salama, Khaled N.

    2017-01-01

    offset voltage will result in signal-dependent and parasitic-dependent conversion errors, which necessitates an op-amp-based implementation. The inverter-based SAR CDC contemplated herein provides robust, energy-efficient, and fast operation. The inverter

  12. Evaluating Environmental Impact of Traffic Congestion in Real Time Based on Sparse Mobile Crowd-sourced Data

    Science.gov (United States)

    2018-02-02

    Traffic congestion at arterial intersections and freeway bottlenecks degrades the air quality and threatens the public health. Conventionally, air pollutants are monitored by sparsely-distributed Quality Assurance Air Monitoring Sites. Sparse mobile ...

  13. Task-based data-acquisition optimization for sparse image reconstruction systems

    Science.gov (United States)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  14. MIMO-OFDM Chirp Waveform Diversity Design and Implementation Based on Sparse Matrix and Correlation Optimization

    Directory of Open Access Journals (Sweden)

    Wang Wen-qin

    2015-02-01

    Full Text Available The waveforms used in Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) should have a large time-bandwidth product and good ambiguity function performance. A scheme to design multiple orthogonal MIMO SAR Orthogonal Frequency Division Multiplexing (OFDM) chirp waveforms through combinatorial sparse matrices and correlation optimization is proposed. First, the problem of MIMO SAR waveform design is reduced to the joint design of hopping frequencies and amplitudes. An iterative exhaustive search algorithm is then adopted to optimally design the code matrix under constraints that minimize the block correlation coefficient of the sparse matrix and the sum of the cross-correlation peaks. The amplitude matrix is adaptively designed by minimizing the cross-correlation peaks with a genetic algorithm. Additionally, the impacts of the number of waveforms, the hopping frequency interval and the selectable frequency indices are analyzed. The simulation results verify that the proposed scheme can design multiple orthogonal, large time-bandwidth product OFDM chirp waveforms with low cross-correlation peaks and sidelobes, improving ambiguity performance.

  15. Bubble parameters analysis of gas-liquid two-phase sparse bubbly flow based on image method

    International Nuclear Information System (INIS)

    Zhou Yunlong; Zhou Hongjuan; Song Lianzhuang; Liu Qian

    2012-01-01

    The sparse rising bubbles of gas-liquid two-phase flow in a vertical pipe were measured and studied using an image-based method. The bubble images were acquired by a high-speed video camera system, and the characteristic parameters of the bubbles were extracted using image processing techniques. The velocity variation of the rising bubbles, as well as the area and centroid variation of individual bubbles, were then plotted, and the parameters and movement law of the bubbles were analyzed and studied. The test results showed that the bubble parameters are characterized well by the image method. (authors)

  16. Sparse dynamics for partial differential equations.

    Science.gov (United States)

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D; Osher, Stanley

    2013-04-23

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.
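    A minimal sketch of this idea for a 1D diffusion equation solved in a Fourier basis is shown below: after every exact spectral time step the coefficients are soft-thresholded so that only the essential modes are kept. The grid size, viscosity and threshold are illustrative choices, not values from the paper.

```python
# Sparse-dynamics sketch for the 1D diffusion equation u_t = nu * u_xx on
# [0, 2*pi): an exact Fourier-space time step followed by soft-thresholding
# of the spectral coefficients.  N, nu, dt and lam are illustrative values.
import numpy as np

N, nu, dt, lam = 256, 0.01, 1e-3, 1e-4
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2)               # smooth initial condition

def soft(c, t):
    """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
    mag = np.abs(c)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * c, 0.0)

for _ in range(1000):
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-nu * k ** 2 * dt)           # exact diffusion step per mode
    u_hat = soft(u_hat, lam * N)                 # keep only the essential modes
    u = np.real(np.fft.ifft(u_hat))
```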

  17. Bernstein approximations in glasso-based estimation of biological networks

    NARCIS (Netherlands)

    Purutcuoglu, Vilda; Agraz, Melih; Wit, Ernst

    The Gaussian graphical model (GGM) is one of the common dynamic modelling approaches in the construction of gene networks. In inference of this modelling the interaction between genes can be detected mainly via graphical lasso (glasso) or coordinate descent-based approaches. Although these methods

  18. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Qing Ye

    2015-01-01

    Full Text Available This research proposes a novel framework for simultaneous failure diagnosis of a final drive, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes yet retains the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into the final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with grid search, which outperforms traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches.

  19. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    International Nuclear Information System (INIS)

    Jones, Ryan M; O’Reilly, Meaghan A; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood–brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist. (paper)

  20. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    Science.gov (United States)

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573

  1. Visual tracking based on the sparse representation of the PCA subspace

    Science.gov (United States)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
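    A hedged sketch of the core optimization, stripped of the particle filter and template update, is given below: it alternately solves for the L2-regularized subspace coefficients and soft-thresholds the residual, which is one standard way to minimize an objective of the form ‖y − Uz − e‖₂² + λ₁‖e‖₁ + λ₂‖z‖₂². The paper's exact iterative scheme may differ; all parameter values here are assumptions.

```python
# Alternating-minimization sketch for
#   min_{z, e} ||y - U z - e||_2^2 + lam1*||e||_1 + lam2*||z||_2^2,
# with U a PCA basis of target templates, z the subspace coefficients and
# e the sparse residual (e.g., occlusion).  lam1, lam2 and the loop count
# are illustrative; this is not claimed to be the paper's exact solver.
import numpy as np

def pca_sparse_residual(y, U, lam1=0.1, lam2=0.01, n_iter=50):
    d, k = U.shape
    z, e = np.zeros(k), np.zeros(d)
    ridge = np.linalg.inv(U.T @ U + lam2 * np.eye(k)) @ U.T
    for _ in range(n_iter):
        z = ridge @ (y - e)                                       # L2-regularized coefficients
        r = y - U @ z
        e = np.sign(r) * np.maximum(np.abs(r) - lam1 / 2, 0.0)    # soft-threshold the residual
    return z, e

# Toy usage with a random orthonormal basis standing in for PCA templates.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 8)))
y = U @ rng.standard_normal(8)
y[:10] += 5.0                                                     # simulated occlusion
z, e = pca_sparse_residual(y, U)
```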

  2. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  3. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    Directory of Open Access Journals (Sweden)

    Haruo Hosoya

    2017-07-01

    Full Text Available Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  4. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    Science.gov (United States)

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. To be able to analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  6. Computing the sparse matrix vector product using block-based kernels without zero padding on processors with AVX-512 instructions

    Directory of Open Access Journals (Sweden)

    Bérenger Bramas

    2018-04-01

    Full Text Available The sparse matrix-vector product (SpMV is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC community has therefore continuously invested a lot of effort to provide an efficient SpMV kernel on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used utilizing a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open source library, called SPC5.
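    For reference, the semantics that any optimized SpMV kernel must reproduce are those of the plain CSR product sketched below; the paper's contribution is a blocked, mask-based AVX-512 implementation of this operation without zero padding, which the sketch makes no attempt to reproduce.

```python
# Reference (scalar) CSR sparse matrix-vector product: the baseline semantics
# behind any blocked AVX-512 kernel.  Pure Python/NumPy, no blocking,
# purely for illustration.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for jj in range(row_ptr[i], row_ptr[i + 1]):   # nonzeros of row i
            y[i] += values[jj] * x[col_idx[jj]]
    return y

# CSR encoding of [[4, 0, 1], [0, 2, 0], [3, 0, 5]]
values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))   # [5. 2. 8.]
```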

  7. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    Science.gov (United States)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining the linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N / L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N³ log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks for the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar products with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.

  8. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2013-01-01

    the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives

  9. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation.

    Science.gov (United States)

    Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai

    2016-02-19

    In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and a vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of the EEG data, and the Fast Fourier Transform (FFT) is adopted to extract the EEG power spectral density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is first applied to the PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.

  10. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Zutao Zhang

    2016-02-01

    Full Text Available In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and a vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver’s EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of the EEG data, and the Fast Fourier Transform (FFT) is adopted to extract the EEG power spectral density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is first applied to the PSD to estimate the driver’s vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.

  11. Turbulent flows over sparse canopies

    Science.gov (United States)

    Sharma, Akshath; García-Mayoral, Ricardo

    2018-04-01

    Turbulent flows over sparse and dense canopies exerting a similar drag force on the flow are investigated using Direct Numerical Simulations. The dense canopies are modelled using a homogeneous drag force, while for the sparse canopy, the geometry of the canopy elements is represented. It is found that on using the friction velocity based on the local shear at each height, the streamwise velocity fluctuations and the Reynolds stress within the sparse canopy are similar to those from a comparable smooth-wall case. In addition, when scaled with the local friction velocity, the intensity of the off-wall peak in the streamwise vorticity for sparse canopies also recovers a value similar to a smooth-wall. This indicates that the sparse canopy does not significantly disturb the near-wall turbulence cycle, but causes its rescaling to an intensity consistent with a lower friction velocity within the canopy. In comparison, the dense canopy is found to have a higher damping effect on the turbulent fluctuations. For the case of the sparse canopy, a peak in the spectral energy density of the wall-normal velocity, and Reynolds stress is observed, which may indicate the formation of Kelvin-Helmholtz-like instabilities. It is also found that a sparse canopy is better modelled by a homogeneous drag applied on the mean flow alone, and not the turbulent fluctuations.

  12. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu; Bibi, Adel Aamer; Ghanem, Bernard

    2016-01-01

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.

  13. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu

    2016-12-13

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.

  14. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    Science.gov (United States)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
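    A compact, hedged sketch of the K-SVD procedure referred to above is given below: it alternates a sparse coding stage (here plain OMP with at most L nonzero coefficients) with an SVD-based update of each dictionary atom. The tiny synthetic data and the use of OMP as the sparse coder are illustrative assumptions; K=4 and L=2 mirror the setting reported in the abstract.

```python
# Condensed K-SVD sketch: alternate sparse coding (plain OMP) with an
# SVD-based rank-1 update of each dictionary atom.  Illustrative only.
import numpy as np

def omp(D, y, L):
    """Orthogonal matching pursuit with at most L selected atoms."""
    residual, support = y.copy(), []
    x, coef = np.zeros(D.shape[1]), np.zeros(0)
    for _ in range(L):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd(Y, K=4, L=2, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], K))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, L) for y in Y.T])      # sparse coding stage
        for k in range(K):                                     # dictionary update stage
            users = np.flatnonzero(X[k])
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                                  # rank-1 refit of atom k
            X[k, users] = s[0] * Vt[0]
    return D, X

# Toy usage on hypothetical standardized feature vectors (one column per case).
Y = np.random.default_rng(1).standard_normal((30, 100))
D, X = ksvd(Y, K=4, L=2)
```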

  15. A Projected Conjugate Gradient Method for Sparse Minimax Problems

    DEFF Research Database (Denmark)

    Madsen, Kaj; Jonasson, Kristjan

    1993-01-01

    A new method for nonlinear minimax problems is presented. The method is of the trust region type and based on sequential linear programming. It is a first order method that only uses first derivatives and does not approximate Hessians. The new method is well suited for large sparse problems...... as it only requires that software for sparse linear programming and a sparse symmetric positive definite equation solver are available. On each iteration a special linear/quadratic model of the function is minimized, but contrary to the usual practice in trust region methods the quadratic model is only...... with the method are presented. In fact, we find that the number of iterations required is comparable to that of state-of-the-art quasi-Newton codes....

  16. MULTISCALE SPARSE APPEARANCE MODELING AND SIMULATION OF PATHOLOGICAL DEFORMATIONS

    Directory of Open Access Journals (Sweden)

    Rami Zewail

    2017-08-01

    Full Text Available Machine learning and statistical modeling techniques have drawn much interest within the medical imaging research community. However, clinically relevant modeling of anatomical structures continues to be a challenging task. This paper presents a novel method for multiscale sparse appearance modeling in medical images with application to the simulation of pathological deformations in X-ray images of the human spine. The proposed appearance model benefits from the non-linear approximation power of Contourlets and their ability to capture higher-order singularities to achieve a sparse representation while preserving the accuracy of the statistical model. Independent Component Analysis is used to extract statistically independent modes of variation from the sparse Contourlet-based domain. The new model is then used to simulate clinically relevant pathological deformations in radiographic images.

  17. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  18. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Science.gov (United States)

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A Novel 3D Imaging Method for Airborne Downward-Looking Sparse Array SAR Based on Special Squint Model

    Directory of Open Access Journals (Sweden)

    Xiaozhen Ren

    2014-01-01

    Full Text Available Three-dimensional (3D) imaging technology based on antenna arrays is one of the most important 3D synthetic aperture radar (SAR) high-resolution imaging modes. In this paper, a novel 3D imaging method is proposed for airborne down-looking sparse array SAR based on the imaging geometry and the characteristics of the echo signal. The key point of the proposed algorithm is the introduction of a special squint model in cross-track processing to obtain accurate focusing. In this special squint model, point targets with different cross-track positions have different squint angles at the same range resolution cell, which is different from conventional squint SAR. However, after theoretical analysis and formula derivation, the imaging procedure can be processed with a uniform reference function, and the phase compensation factors and the algorithm realization procedure are demonstrated in detail. As the method requires only Fourier transforms and multiplications, and thus avoids interpolations, it is computationally efficient. Simulations with point scatterers are used to validate the method.

  20. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interests in System Identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels...

  1. Single base-resolution methylome of the silkworm reveals a sparse epigenomic map

    DEFF Research Database (Denmark)

    Xiang, Hui; Zhu, Jingde; Chen, Quan

    2010-01-01

    Epigenetic regulation in insects may have effects on diverse biological processes. Here we survey the methylome of a model insect, the silkworm Bombyx mori, at single-base resolution using Illumina high-throughput bisulfite sequencing (MethylC-Seq). We conservatively estimate that 0.11% of genomi...

  2. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  3. Analysis of Neural Oscillations on Drosophila’s Subesophageal Ganglion Based on Approximate Entropy

    Directory of Open Access Journals (Sweden)

    Tian Mei

    2015-10-01

    Full Text Available The suboesophageal ganglion (SOG), which connects to both central and peripheral nerves, is the primary taste-processing center in the Drosophila brain. The neural oscillations in this center may be of great research value, yet they are rarely reported. This work aims to determine the amount of unique information contained within oscillations of the SOG and to describe the variability of these patterns. The approximate entropy (ApEn) values of the spontaneous membrane potential (sMP) of SOG neurons were calculated in this paper. The arithmetic mean (MA), standard deviation (SDA) and coefficient of variation (CVA) of ApEn were proposed as three statistical indicators to describe the irregularity and complexity of the oscillations, and hierarchical clustering was used to classify them. As a result, the oscillations in the SOG were divided into five categories: (1) continuous spike pattern; (2) mixed oscillation pattern; (3) spikelet pattern; (4) bursting pattern; and (5) sparse spike pattern. A steady oscillation state has a low level of irregularity, and vice versa. Dopamine stimulation distinctly reduces the complexity of the mixed oscillation pattern. The current study provides a quantitative method and some criteria for mining the information carried in neural oscillations.
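    Approximate entropy itself is a standard, well-defined quantity; a direct implementation following Pincus's definition (Chebyshev distance, self-matches included) is sketched below. The choice m=2 and r=0.2·std is the common default and is an assumption here, since this record does not state the paper's exact settings.

```python
# Approximate entropy ApEn(m, r): Phi_m - Phi_{m+1}, where Phi_m is the mean
# log fraction of length-m templates that stay within Chebyshev tolerance r
# (self-matches included).  m=2 and r=0.2*std are assumed defaults.
import numpy as np

def apen(u, m=2, r=None):
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * np.std(u)
    def phi(mm):
        n = len(u) - mm + 1
        X = np.array([u[i:i + mm] for i in range(n)])             # template vectors
        dist = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        C = np.mean(dist <= r, axis=1)                            # match fractions
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)

# A regular oscillation has lower ApEn than an irregular one.
t = np.linspace(0, 10, 1000)
print(apen(np.sin(2 * np.pi * t)), apen(np.random.default_rng(1).standard_normal(1000)))
```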

  4. Anisotropic interaction rules in circular motions of pigeon flocks: An empirical study based on sparse Bayesian learning

    Science.gov (United States)

    Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao

    2017-08-01

    Coordination is deemed to be the result of interindividual interactions in natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on an analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m, closer than the average maximum interindividual distance, after which it decays strictly with increasing pairwise metric distance. Significantly, the density of the spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along the direction of individual velocity. Thus, it is found that in small-sized bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer, time-varying neighbors, rather than interacting with a fixed number of topological ones. Finally, extensive numerical investigations are conducted to verify both the revealed interaction and decision-making principles during circular flights of pigeon flocks.

  5. Reliable Fault Diagnosis of Rotary Machine Bearings Using a Stacked Sparse Autoencoder-Based Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sohaib

    2018-01-01

    Full Text Available Due to enhanced safety, cost-effectiveness, and reliability requirements, fault diagnosis of bearings using vibration acceleration signals has been a key area of research over the past several decades. Many fault diagnosis algorithms have been developed that can efficiently classify faults under constant speed conditions. However, the performances of these traditional algorithms deteriorate with fluctuations of the shaft speed. In the past couple of years, deep learning algorithms have not only improved the classification performance in various disciplines (e.g., in image processing and natural language processing), but also reduced the complexity of feature extraction and selection processes. In this study, using complex envelope spectra and stacked sparse autoencoder (SSAE)-based deep neural networks (DNNs), a fault diagnosis scheme is developed that can overcome fluctuations of the shaft speed. The complex envelope spectrum makes the frequency components associated with each fault type prominent, helping the autoencoders to learn the characteristic features from the given input signals more readily. Moreover, the implementation of the SSAE-DNN for bearing fault diagnosis avoids the need for handcrafted features that are used in traditional fault diagnosis schemes. The experimental results demonstrate that the proposed scheme outperforms conventional fault diagnosis algorithms in terms of fault classification accuracy when tested with variable shaft speed data.
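    The envelope-spectrum representation fed to the SSAE can be computed with a Hilbert transform followed by an FFT of the demeaned envelope, as in the sketch below. The SSAE/DNN classifier itself is not reproduced, and the sampling rate and synthetic fault signature are illustrative assumptions.

```python
# Envelope spectrum via the Hilbert transform: the kind of representation fed
# to the stacked sparse autoencoder.  fs and the synthetic bearing-like signal
# (3 kHz resonance modulated at a 37 Hz fault rate) are assumptions.
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    envelope = np.abs(hilbert(x))                 # amplitude envelope of the signal
    env = envelope - np.mean(envelope)            # remove the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

fs = 20_000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)            # spectral peak appears near 37 Hz
```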

  6. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Owing to a number of limitations, such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston" data and the multispectral "Washington DC" data show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  7. Intensity-based bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data is undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice for use in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under the condition of highly undersampled projection data. (author)
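    The baseline the proposed framework builds on is the multiplicative ML-EM update for Poisson projection data, sketched below; the paper's l1 penalty toward the expected intensity values is deliberately omitted here, so this is only the unregularized starting point, not the proposed method.

```python
# Plain ML-EM update for Poisson projection data y ~ Poisson(A @ x):
#   x <- x / (A^T 1) * A^T (y / (A @ x)).
# The intensity-based l1 prior of the paper is omitted; A and y are illustrative.
import numpy as np

def ml_em(A, y, n_iter=50, eps=1e-12):
    x = np.ones(A.shape[1])                       # strictly positive initial image
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)        # measured / estimated projections
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x

# Toy usage with a random nonnegative system matrix standing in for the projector.
rng = np.random.default_rng(0)
A = rng.random((40, 16))
x_true = rng.random(16)
y = rng.poisson(A @ x_true * 50) / 50.0
x_hat = ml_em(A, y)
```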

  8. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnostic process. Usually, diagnosis from low-resolution (LR) and noisy images is difficult and inaccurate. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse-coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is better than that of other state-of-the-art schemes.
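    A hedged sketch of the ROMP selection rule is shown below: at each step the s largest correlations with the residual are grouped into windows of comparable magnitude (within a factor of two), and the group with the largest energy is added to the support before a least-squares refit. The surrounding K-SVD training and patch-based SR pipeline are omitted, and the stopping rule and parameters are simplifications.

```python
# Sketch of regularized OMP (ROMP).  D, y and s below are illustrative; this
# is not the full patch-based super-resolution pipeline of the paper.
import numpy as np

def romp(D, y, s):
    support, residual, coef = [], y.copy(), np.zeros(0)
    while len(support) < 2 * s and np.linalg.norm(residual) > 1e-10:
        u = D.T @ residual
        cand = np.argsort(np.abs(u))[::-1][:s]              # s largest correlations
        mags = np.abs(u[cand])
        best_set, best_energy = cand[:1], -1.0
        for i in range(len(cand)):                          # contiguous comparable-magnitude windows
            j = i
            while j + 1 < len(cand) and mags[i] <= 2 * mags[j + 1]:
                j += 1
            energy = float(np.sum(mags[i:j + 1] ** 2))
            if energy > best_energy:
                best_set, best_energy = cand[i:j + 1], energy
        support = sorted(set(support) | set(best_set.tolist()))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef                 # least-squares refit on the support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy usage: recover a 3-sparse vector from a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40, 100]] = [1.0, -2.0, 1.5]
x_hat = romp(D, D @ x_true, s=3)
```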

  9. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  10. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

    Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample which is often very complex in a finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

  11. Generalized frameworks for first-order evolution inclusions based on Yosida approximations

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2011-04-01

    Full Text Available First, general frameworks for the first-order evolution inclusions are developed based on the A-maximal relaxed monotonicity, and then using the Yosida approximation the solvability of a general class of first-order nonlinear evolution inclusions is investigated. The role the A-maximal relaxed monotonicity is significant in the sense that it not only empowers the first-order nonlinear evolution inclusions but also generalizes the existing Yosida approximations and its characterizations in the current literature.
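
    For readers unfamiliar with the construction, the classical Yosida approximation that this framework generalizes replaces a maximal monotone operator A by a single-valued, Lipschitz-continuous operator built from its resolvent (standard textbook definitions, stated here in LaTeX for reference):

        J_\lambda = (I + \lambda A)^{-1}, \qquad
        A_\lambda = \frac{1}{\lambda}\,\bigl(I - J_\lambda\bigr), \qquad \lambda > 0,

    so that A_\lambda x \in A(J_\lambda x) and A_\lambda approximates A as \lambda \to 0^{+}.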

  12. Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.

    Science.gov (United States)

    Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam

    2018-06-01

    The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only. This is proved because the core and the folds of the protein will have two identical sides for all short sequences.
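
    To make the objective concrete, the quantity that HP-model approximation algorithms seek to maximize is the number of H-H contacts, i.e. pairs of hydrophobic residues that are adjacent on the 2D lattice but not consecutive in the chain. A minimal scoring sketch follows; the sequence and fold are illustrative and are not taken from the paper.

        def hh_contacts(sequence, fold):
            """Count H-H contacts of a 2D square-lattice fold of an HP sequence.

            sequence : string over {'H', 'P'}
            fold     : list of (x, y) lattice coordinates, one per residue,
                       forming a self-avoiding walk
            """
            position = {coord: i for i, coord in enumerate(fold)}
            contacts = 0
            for i, (x, y) in enumerate(fold):
                if sequence[i] != 'H':
                    continue
                for neighbor in ((x + 1, y), (x, y + 1)):  # each pair counted once
                    j = position.get(neighbor)
                    if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                        contacts += 1
            return contacts

        # Illustrative 6-residue fold on the square lattice
        print(hh_contacts("HPHPHH", [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2)]))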

  13. WE-G-BRD-02: Characterizing Information Loss in a Sparse-Sampling-Based Dynamic MRI Sequence (k-T BLAST) for Lung Motion Monitoring

    International Nuclear Information System (INIS)

    Arai, T; Nofiele, J; Sawant, A

    2015-01-01

    Purpose: Rapid MRI is an attractive, non-ionizing tool for soft-tissue-based monitoring of respiratory motion in thoracic and abdominal radiotherapy. One big challenge is to achieve high temporal resolution while maintaining adequate spatial resolution. k-t BLAST, a sparse-sampling and reconstruction sequence based on a priori information, represents a potential solution. In this work, we investigated how much “true” motion information is lost as a priori information is progressively added for faster imaging. Methods: Lung tumor motions in the superior-inferior direction obtained from ten individuals were replayed on an in-house, MRI-compatible, programmable motion platform (50 Hz refresh rate and 100 micron precision). Six water-filled 1.5 ml tubes were placed on it as fiducial markers. Dynamic marker motion within a coronal slice (FOV: 32×32 cm², resolution: 0.67×0.67 mm², slice thickness: 5 mm) was collected on a 3.0 T body scanner (Ingenia, Philips). Balanced-FFE (TE/TR: 1.3 ms/2.5 ms, flip angle: 40 degrees) was used in conjunction with k-t BLAST. Each motion was repeated four times as four k-t acceleration factors 1, 2, 5, and 16 (corresponding frame rates of 2.5, 4.7, 9.8, and 19.1 Hz, respectively) were compared. For each image set, one average motion trajectory was computed from the six marker displacements. The root mean square (RMS) error was used as a metric of spatial accuracy, with measured trajectories compared to the original data. Results: Tumor motion was approximately 10 mm. The mean (standard deviation) of respiratory rates over the ten patients was 0.28 (0.06) Hz. Cumulative distributions of tumor motion frequency spectra (0–25 Hz) obtained from the patients showed that 90% of motion fell at 3.88 Hz or less. Therefore, the frame rate must be at least double this for accurate monitoring. The RMS errors over patients for k-t factors of 1, 2, 5, and 16 were 0.10 (0.04), 0.17 (0.04), 0.21 (0.06), and 0.26 (0.06) mm, respectively. Conclusions: A k-t factor of 5 or higher can cover the high

  14. Isotropic 3D cardiac cine MRI allows efficient sparse segmentation strategies based on 3D surface reconstruction.

    Science.gov (United States)

    Odille, Freddy; Bustin, Aurélien; Liu, Shufang; Chen, Bailiang; Vuissoz, Pierre-André; Felblinger, Jacques; Bonnemains, Laurent

    2018-05-01

    Segmentation of cardiac cine MRI data is routinely used for the volumetric analysis of cardiac function. Conventionally, 2D contours are drawn on short-axis (SAX) image stacks with relatively thick slices (typically 8 mm). Here, an acquisition/reconstruction strategy is used for obtaining isotropic 3D cine datasets; reformatted slices are then used to optimize the manual segmentation workflow. Isotropic 3D cine datasets were obtained from multiple 2D cine stacks (acquired during free-breathing in SAX and long-axis (LAX) orientations) using nonrigid motion correction (cine-GRICS method) and super-resolution. Several manual segmentation strategies were then compared, including conventional SAX segmentation, LAX segmentation in three views only, and combinations of SAX and LAX slices. An implicit B-spline surface reconstruction algorithm is proposed to reconstruct the left ventricular cavity surface from the sparse set of 2D contours. All tested sparse segmentation strategies were in good agreement, with Dice scores above 0.9 despite using fewer slices (3-6 sparse slices instead of 8-10 contiguous SAX slices). When compared to independent phase-contrast flow measurements, stroke volumes computed from four or six sparse slices had slightly higher precision than conventional SAX segmentation (error standard deviation of 5.4 mL against 6.1 mL) at the cost of slightly lower accuracy (bias of -1.2 mL against 0.2 mL). Functional parameters also showed a trend toward improved precision, including end-diastolic volumes, end-systolic volumes, and ejection fractions. The postprocessing workflow of 3D isotropic cardiac imaging strategies can be optimized using sparse segmentation and 3D surface reconstruction. Magn Reson Med 79:2665-2675, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
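
    As a point of reference for the analytical techniques above, the Monte Carlo baseline estimates a first-order variance-based sensitivity index with the "pick-freeze" construction: two evaluations of the model share only the input of interest, and their covariance estimates the conditional variance term. The sketch below uses an illustrative additive test function, not the MAPK cascade model of the paper.

        import numpy as np

        def first_order_index(f, dim, i, n=100_000, rng=None):
            """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i.

            f : vectorized model, f(X) with X of shape (n, dim), inputs ~ U(0, 1)
            i : index of the input factor of interest
            """
            rng = rng or np.random.default_rng(0)
            A = rng.random((n, dim))
            B = rng.random((n, dim))
            BA = B.copy()
            BA[:, i] = A[:, i]              # the two runs share only input i
            fA, fBA = f(A), f(BA)
            cov = np.mean(fA * fBA) - np.mean(fA) * np.mean(fBA)
            return cov / np.var(fA)         # covariance estimates Var(E[Y | X_i])

        # Illustrative model: Y = X1 + 2*X2 + 0.1*X3; S_2 should be close to 0.8
        model = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.1 * X[:, 2]
        print(first_order_index(model, dim=3, i=1))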

  16. Electronic and Optical Properties of CuO Based on DFT+U and GW Approximation

    International Nuclear Information System (INIS)

    Ahmad, F; Agusta, M K; Dipojono, H K

    2016-01-01

    We report ab initio calculations of the electronic structure and optical properties of monoclinic CuO based on DFT+U and the GW approximation. CuO is an antiferromagnetic material with strong electron correlations. Our calculations show that DFT+U and the GW approximation are sufficiently reliable for investigating the material properties of CuO. The band gap calculated with DFT+U for a reasonable value of U is slightly underestimated. The use of the GW approximation requires adjustment of the U value to obtain a realistic result. Hybridization of Cu 3dxz and 3dyz with O 2p plays an important role in the formation of the band gap. The optical properties calculated with DFT+U and GW corrections by solving the Bethe-Salpeter equation are in good agreement with the calculated electronic properties and the experimental results. (paper)

  17. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  18. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable to drive the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we have proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids have performances similar to those of the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to also tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
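
    The control-variate use mentioned at the end of the abstract replaces the plain Monte Carlo mean by E[Y] ≈ mean(Y - c (Z - mu_Z)), where Z is a cheap, correlated surrogate (here, the sparse-grid approximation) whose mean mu_Z is known or computed accurately. A generic numerical sketch with toy variables, standing in for the expensive PDE quantity of interest, is given below.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        # Toy setup: Y is the "expensive" quantity of interest, Z a cheap correlated
        # surrogate (playing the role of the sparse-grid approximation) with known mean.
        x = rng.normal(size=n)
        Y = np.exp(0.3 * x) + 0.05 * rng.normal(size=n)
        Z = 1.0 + 0.3 * x
        mu_Z = 1.0                                    # E[Z], assumed known

        c = np.cov(Y, Z)[0, 1] / np.var(Z)            # (near-)optimal coefficient
        plain_estimate = Y.mean()
        cv_estimate = np.mean(Y - c * (Z - mu_Z))

        print(plain_estimate, cv_estimate)
        print(np.var(Y), np.var(Y - c * (Z - mu_Z)))  # variance reduction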

  19. Sparse distributed memory

    Science.gov (United States)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.

  20. Numerical solution of large sparse linear systems

    International Nuclear Information System (INIS)

    Meurant, Gerard; Golub, Gene.

    1982-02-01

    This note is based on one of the lectures given at the 1980 CEA-EDF-INRIA Numerical Analysis Summer School, whose aim was the study of large sparse linear systems. The main topics are solving least squares problems by orthogonal transformations, fast Poisson solvers, and the solution of sparse linear systems by iterative methods, with special emphasis on the preconditioned conjugate gradient method [fr]
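
    A present-day illustration of the preconditioned conjugate gradient method emphasized in these lectures, using a simple Jacobi (diagonal) preconditioner on a sparse symmetric positive-definite system, is sketched below; the 1D discrete Laplacian is only a stand-in test matrix.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Stand-in SPD test matrix: 1D discrete Laplacian
        n = 1000
        A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Jacobi (diagonal) preconditioner: M^{-1} r = diag(A)^{-1} r
        inv_diag = 1.0 / A.diagonal()
        M = spla.LinearOperator((n, n), matvec=lambda r: inv_diag * r)

        x, info = spla.cg(A, b, M=M)                  # info == 0 signals convergence
        print(info, np.linalg.norm(A @ x - b))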

  1. Sparse Channel Estimation Including the Impact of the Transceiver Filters with Application to OFDM

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Pedersen, Niels Lovmand; Manchón, Carles Navarro

    2014-01-01

    Traditionally, the dictionary matrices used in sparse wireless channel estimation have been based on the discrete Fourier transform, following the assumption that the channel frequency response (CFR) can be approximated as a linear combination of a small number of multipath components, each one......) and receive (demodulation) filters. Hence, the assumption of the CFR being sparse in the canonical Fourier dictionary may no longer hold. In this work, we derive a signal model and subsequently a novel dictionary matrix for sparse estimation that account for the impact of transceiver filters. Numerical...... results obtained in an OFDM transmission scenario demonstrate the superior accuracy of a sparse estimator that uses our proposed dictionary rather than the classical Fourier dictionary, and its robustness against a mismatch in the assumed transmit filter characteristics....

  2. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, and that this also makes it possible to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
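
    The quantity at the heart of FERL can be written down compactly: for a restricted Boltzmann machine with binary hidden units and a visible (state-action) vector v, the negative free energy is b·v plus a softplus over the hidden units, and the value estimate is this quantity (EERL replaces it by the negative expected energy). A minimal sketch with arbitrary illustrative weights, not the trained networks of the paper, follows.

        import numpy as np

        def negative_free_energy(v, W, b, c):
            """Negative free energy of an RBM with binary hidden units.

            v : (n_visible,) state-action vector
            W : (n_visible, n_hidden) weights; b, c : visible and hidden biases
            -F(v) = b.v + sum_j log(1 + exp(c_j + W[:, j].v))
            """
            pre_activation = c + v @ W
            softplus = np.logaddexp(0.0, pre_activation)   # numerically stable log(1 + e^x)
            return b @ v + softplus.sum()

        # Illustrative sizes and random weights
        rng = np.random.default_rng(0)
        n_vis, n_hid = 12, 8
        W = 0.1 * rng.normal(size=(n_vis, n_hid))
        b, c = np.zeros(n_vis), np.zeros(n_hid)
        v = rng.integers(0, 2, size=n_vis).astype(float)
        print(negative_free_energy(v, W, b, c))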

  3. NIMROD: a program for inference via a normal approximation of the posterior in models with random effects based on ordinary differential equations.

    Science.gov (United States)

    Prague, Mélanie; Commenges, Daniel; Guedj, Jérémie; Drylewicz, Julia; Thiébaut, Rodolphe

    2013-08-01

    Models based on ordinary differential equations (ODE) are widespread tools for describing dynamical systems. In biomedical sciences, data from each subject can be sparse, making it difficult to precisely estimate individual parameters by standard non-linear regression, but information can often be gained from between-subject variability. This makes the use of mixed-effects models to estimate population parameters natural. Although the maximum likelihood approach is a valuable option, identifiability issues favour Bayesian approaches, which can incorporate prior knowledge in a flexible way. However, the combination of difficulties coming from the ODE system and from the presence of random effects raises a major numerical challenge. Computations can be simplified by making a normal approximation of the posterior to find the maximum of the posterior distribution (MAP). Here we present the NIMROD program (normal approximation inference in models with random effects based on ordinary differential equations) devoted to MAP estimation in ODE models. We describe the specific implemented features, such as convergence criteria and an approximation of the leave-one-out cross-validation to assess the model's quality of fit. First, we evaluate the properties of this algorithm in pharmacokinetics models and compare it with the FOCE and MCMC algorithms in simulations. Then, we illustrate the use of NIMROD on Amprenavir pharmacokinetics data from the PUZZLE clinical trial in HIV-infected patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
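
    The parametric likelihood approximation referred to above can be sketched generically: at a candidate parameter vector, the stochastic simulator is run repeatedly, summary statistics are computed, and a multivariate normal fitted to those statistics supplies an approximate log-likelihood of the observed summaries, which can then be used inside a standard MCMC sampler. The simulator and summaries below are illustrative placeholders, not the FORMIND forest model.

        import numpy as np
        from scipy.stats import multivariate_normal

        def approx_log_likelihood(theta, observed_summary, simulator, n_sims=200, rng=None):
            """Simulation-based parametric (normal) approximation of the log-likelihood."""
            rng = rng or np.random.default_rng(0)
            sims = np.array([simulator(theta, rng) for _ in range(n_sims)])
            mu = sims.mean(axis=0)
            cov = np.cov(sims, rowvar=False) + 1e-8 * np.eye(sims.shape[1])  # regularized
            return multivariate_normal.logpdf(observed_summary, mean=mu, cov=cov)

        # Illustrative stochastic "model": summaries are the mean and std of noisy growth
        def toy_simulator(theta, rng):
            growth, noise = theta
            data = growth + noise * rng.normal(size=50)
            return np.array([data.mean(), data.std()])

        obs = np.array([1.2, 0.4])
        print(approx_log_likelihood((1.0, 0.5), obs, toy_simulator))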

  5. Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization

    Science.gov (United States)

    Kowalczyk, L.; Goszczynska, H.; Zalewska, E.; Bajera, A.; Krolicki, L.

    2014-04-01

    This work was performed as part of a larger research effort concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for the superposition of SPECT images and 3D EEG maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images, this is a much more difficult task than in the case of an MRI/EEG study, where the electrodes are visible in the morphological images. Using an appropriate dose of radioisotope, we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning all electrodes. It happens, however, that not all five base electrodes can be reliably identified in the SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrode(s) missing in the SPECT image. The algorithm for coordinate approximation has been developed and was tested on data collected for three subjects with all electrodes visible. To increase the accuracy of the approximation we used head surface models. A freely available model from Oostenveld's research, based on data from the SPM package, and our own model, based on data from our EEG/SPECT studies, were used. For data collected in four cases with one electrode not visible, we compared the invisible base electrode coordinate approximation for the Oostenveld model and our model. The results vary depending on the missing electrode placement, but application of the realistic head model significantly increases the accuracy of the approximation.

  6. Invisible Base Electrode Coordinates Approximation for Simultaneous SPECT and EEG Data Visualization

    Directory of Open Access Journals (Sweden)

    Kowalczyk L.

    2014-04-01

    Full Text Available This work was performed as part of a larger research effort concerning the feasibility of improving the localization of epileptic foci, as compared to the standard SPECT examination, by applying the technique of EEG mapping. The presented study extends our previous work on the development of a method for the superposition of SPECT images and 3D EEG maps when these two examinations are performed simultaneously. Due to the lack of anatomical data in SPECT images, this is a much more difficult task than in the case of an MRI/EEG study, where the electrodes are visible in the morphological images. Using an appropriate dose of radioisotope, we mark five base electrodes to make them visible in the SPECT image and then approximate the coordinates of the remaining electrodes using properties of the 10-20 electrode placement system and the proposed nine-ellipses model. This allows computing a sequence of 3D EEG maps spanning all electrodes. It happens, however, that not all five base electrodes can be reliably identified in the SPECT data. The aim of the current study was to develop a method for determining the coordinates of base electrode(s) missing in the SPECT image. The algorithm for coordinate approximation has been developed and was tested on data collected for three subjects with all electrodes visible. To increase the accuracy of the approximation we used head surface models. A freely available model from Oostenveld's research, based on data from the SPM package, and our own model, based on data from our EEG/SPECT studies, were used. For data collected in four cases with one electrode not visible, we compared the invisible base electrode coordinate approximation for the Oostenveld model and our model. The results vary depending on the missing electrode placement, but application of the realistic head model significantly increases the accuracy of the approximation.

  7. Design of reciprocal unit based on the Newton-Raphson approximation

    DEFF Research Database (Denmark)

    Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael

    A design of a reciprocal unit based on the Newton-Raphson approximation is described and implemented. We present two different designs for single precision, one of which is extremely fast at the cost of an increase in area. The solution behind the fast design is that the design is fully
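
    The underlying iteration is the standard Newton-Raphson recurrence for the reciprocal, x_{k+1} = x_k (2 - a x_k), whose error roughly squares at every step; hardware designs typically unroll a fixed number of iterations against a small lookup-table seed. A short software sketch, assuming a crude linear seed for normalized inputs in [0.5, 1), is shown below.

        def reciprocal(a, iterations=4):
            """Approximate 1/a by Newton-Raphson: x <- x * (2 - a * x).

            Convergence requires 0 < x0 < 2/a; the crude seed below targets
            a normalized mantissa a in [0.5, 1).
            """
            x = 2.9 - 2.0 * a              # simple linear seed, accurate to a few bits
            for _ in range(iterations):
                x = x * (2.0 - a * x)      # quadratic convergence
            return x

        print(reciprocal(0.75), 1.0 / 0.75)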

  8. Binary-collision-approximation-based simulation of noble gas irradiation to tungsten materials

    International Nuclear Information System (INIS)

    Saito, Seiki; Takayama, Arimichi; Ito, Atsushi M.; Nakamura, Hiroaki

    2013-01-01

    To reveal the possibility of fuzz formation on tungsten material under noble gas irradiation, helium, neon, and argon atom injections into tungsten materials are performed by binary-collision-approximation-based simulation. The penetration depth depends strongly on the structure of the target material. Therefore, the penetration depths for amorphous and bcc crystalline structures are carefully investigated in this paper

  9. A Comparison between Model Base Hardconstrain, Bandlimited, and Sparse-Spike Seismic Inversion: New Insights for CBM Reservoir Modelling on Muara Enim Formation, South Sumatra

    Science.gov (United States)

    Mohamad Noor, Faris; Adipta, Agra

    2018-03-01

    Coal Bed Methane (CBM), as a newly developed resource in Indonesia, is one of the alternatives to relieve Indonesia's dependence on conventional energy. The coal resource of the Muara Enim Formation is known as one of the prolific reservoirs in the South Sumatra Basin. Seismic inversion and well analysis are done to determine the coal seam characteristics of the Muara Enim Formation. This research uses three inversion methods: model-based hard-constraint, band-limited, and sparse-spike inversion. Each type of seismic inversion has its own advantages for displaying the coal seam and its characteristics. Interpretation of the analysis data shows that the Muara Enim coal seam has a gamma ray value of 20 API, a density of 1–1.4 g/cc from the density log, and a low AI cutoff value ranging between 5000 and 6400 (m/s)*(g/cc). The coal seam thins laterally from northwest to southeast. The coal seam appears biased in the model-based hard-constraint inversion and discontinuous in the band-limited inversion, which is not consistent with the geological model. The appropriate AI inversion is sparse-spike inversion, which has the best correlation value (0.884757) from the inversion cross-plot among the chosen inversion methods. Sparse-spike inversion itself preserves high amplitudes, making it a proper tool to identify coal seam continuity, which commonly appears as a thin layer. The cross-sectional sparse-spike inversion shows possible new borehole locations at CDP 3662-3722, CDP 3586-3622, and CDP 4004-4148, which are seen in the seismic data as thick coal seams.

  10. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.

  11. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    Science.gov (United States)

    2016-09-01

    412TW-PA-16437: Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data – A Distance-based Approach. Technical paper, dated 09-19-2016. ...opinion rankings, are common in many areas of application. In the Air Force, Cooper-Harper ratings are used extensively for the assessment of Flying
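
    For context, classical ridit analysis scores each ordinal category by the cumulative proportion of a reference distribution below it plus half the proportion within it; a comparison group's mean ridit then estimates the probability that a randomly chosen member of that group rates higher than a randomly chosen reference member. A minimal sketch with made-up rating counts (not data from the report) follows.

        import numpy as np

        def ridits(reference_counts):
            """Ridit score of each ordered category under a reference distribution."""
            p = np.asarray(reference_counts, dtype=float)
            p /= p.sum()
            below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
            return below + 0.5 * p

        def mean_ridit(group_counts, reference_counts):
            """Mean ridit of a comparison group relative to the reference group."""
            g = np.asarray(group_counts, dtype=float)
            return float((g / g.sum()) @ ridits(reference_counts))

        # Made-up counts over 5 ordered rating categories
        reference = [4, 10, 18, 9, 3]
        comparison = [1, 5, 12, 15, 8]
        print(mean_ridit(comparison, reference))   # > 0.5 means shifted toward higher ratings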

  12. Adaptive Sliding Mode Control of MEMS Gyroscope Based on Neural Network Approximation

    Directory of Open Access Journals (Sweden)

    Yuzheng Yang

    2014-01-01

    Full Text Available An adaptive sliding controller using a radial basis function (RBF) network to approximate the unknown system dynamics of a microelectromechanical systems (MEMS) gyroscope sensor is proposed. A neural controller is proposed to approximate the unknown system model, and a sliding controller is employed to eliminate the approximation error and attenuate the model uncertainties and external disturbances. Online neural network (NN) weight tuning algorithms, including correction terms, are designed based on Lyapunov stability theory, which can guarantee bounded tracking errors as well as bounded NN weights. The tracking error bound can be made arbitrarily small by increasing a certain feedback gain. Numerical simulation for a MEMS angular velocity sensor is investigated to verify the effectiveness of the proposed adaptive neural control scheme and demonstrate the satisfactory tracking performance and robustness.
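
    The approximator at the core of such controllers is a standard RBF network, f_hat(x) = W^T h(x) with Gaussian basis outputs h_i(x) = exp(-||x - c_i||^2 / (2 sigma_i^2)); in the paper the weights are tuned online by the Lyapunov-based laws. The static mapping itself can be sketched as below, with arbitrary illustrative centers and weights rather than the tuned controller parameters.

        import numpy as np

        def rbf_output(x, centers, widths, W):
            """RBF network mapping: f_hat(x) = W.T @ h(x) with Gaussian basis functions."""
            diffs = centers - x                                    # (n_neurons, n_inputs)
            h = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * widths**2))
            return W.T @ h

        # Illustrative network: 2 inputs, 5 Gaussian neurons, 1 output
        rng = np.random.default_rng(0)
        centers = rng.uniform(-1.0, 1.0, size=(5, 2))
        widths = np.full(5, 0.5)
        W = rng.normal(size=(5, 1))
        print(rbf_output(np.array([0.2, -0.1]), centers, widths, W))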

  13. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  14. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

    Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, that is, least squares (LS) and the least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure information, while the LASSO-based sparse channel method cannot take full advantage of the prior information. Based on the partial sparse assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, by using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Secondly, the cooperative channel is reconstructed by LASSO with the partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed methods over global sparse channel estimation methods.
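
    A bare-bones version of the LASSO baseline discussed above (without the paper's partial-sparsity refinement) can be sketched with an off-the-shelf solver: the received pilot observations are regressed on the pilot measurement matrix with an L1 penalty so that only a few dominant channel taps survive. The measurement matrix, channel, and regularization weight below are illustrative, and real-valued signals are used for simplicity.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_pilots, n_taps = 40, 128

        # Illustrative real-valued pilot measurement matrix and a 4-sparse channel
        Phi = rng.normal(size=(n_pilots, n_taps)) / np.sqrt(n_pilots)
        h = np.zeros(n_taps)
        h[[3, 17, 40, 90]] = [1.0, -0.7, 0.5, 0.3]
        y = Phi @ h + 0.01 * rng.normal(size=n_pilots)

        # L1-regularized (LASSO) estimate of the sparse channel
        lasso = Lasso(alpha=0.005, max_iter=10_000)
        lasso.fit(Phi, y)
        print(np.flatnonzero(np.abs(lasso.coef_) > 0.05))   # recovered tap locations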

  15. THREE-MOMENT BASED APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING SYSTEMS

    Directory of Open Access Journals (Sweden)

    T. I. Aliev

    2014-03-01

    Full Text Available The paper deals with the problem of approximating the probability distributions of random variables defined on the positive real half-line with a coefficient of variation different from unity. When queueing systems are used as models for computer networks, calculation of characteristics is usually performed at the level of expectation and variance. At the same time, one of the main characteristics of multimedia data transmission quality in computer networks is delay jitter. For jitter calculation, the distribution function of the packet time delay should be known. It is shown that changing the third moment of the packet delay distribution leads to differences of tens or hundreds of percent in the calculated jitter, with the same values of the first two moments – the expectation value and the delay variation coefficient. This means that the delay distribution approximation used for the calculation of jitter should be performed in accordance with the third moment of the delay distribution. For random variables with coefficients of variation greater than unity, an iterative approximation algorithm using a two-phase hyper-exponential distribution based on three moments of the approximated distribution is offered. It is shown that for random variables with coefficients of variation less than unity, the impact of the third moment of the distribution becomes negligible, and for the approximation of such distributions an Erlang distribution matched to the first two moments should be used. This approach makes it possible to obtain upper bounds for relevant characteristics, particularly the upper bound of delay jitter.

  16. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
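
    The essence of the SSVD construction can be conveyed by a rank-one power iteration in which each singular-vector update is soft-thresholded to force zero entries; the fixed thresholds below are illustrative stand-ins for the adaptively selected penalty parameters discussed in the paper.

        import numpy as np

        def soft_threshold(z, lam):
            return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

        def sparse_rank1(X, lam_u=0.3, lam_v=0.3, n_iter=100):
            """Alternating soft-thresholded power iteration for a sparse rank-1 layer."""
            u = np.random.default_rng(0).normal(size=X.shape[0])
            u /= np.linalg.norm(u)
            v = np.zeros(X.shape[1])
            for _ in range(n_iter):
                v = soft_threshold(X.T @ u, lam_v)
                if np.linalg.norm(v) == 0:
                    break
                v /= np.linalg.norm(v)
                u_new = soft_threshold(X @ v, lam_u)
                if np.linalg.norm(u_new) == 0:
                    break
                u = u_new / np.linalg.norm(u_new)
            d = u @ X @ v
            return d, u, v       # X is approximated by d * outer(u, v)

        # Toy data with a planted sparse "checkerboard" block
        rng = np.random.default_rng(1)
        X = 0.1 * rng.normal(size=(50, 40))
        X[:10, :8] += 2.0
        d, u, v = sparse_rank1(X)
        print(np.count_nonzero(u), np.count_nonzero(v))   # sparse supports of the block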

  17. Least 1-Norm Pole-Zero Modeling with Sparse Deconvolution for Speech Analysis

    DEFF Research Database (Denmark)

    Shi, Liming; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    In this paper, we present a speech analysis method based on sparse pole-zero modeling of speech. Instead of using the all-pole model to approximate the speech production filter, a pole-zero model is used for the combined effect of the vocal tract, radiation at the lips and the glottal pulse shape...

  18. Efficient coordinated recovery of sparse channels in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-01-01

    This paper addresses the problem of estimating sparse channels in massive MIMO-OFDM systems. Most wireless channels are sparse in nature with large delay spread. In addition, these channels as observed by multiple antennas in a neighborhood have approximately common support. The sparsity and common support properties are attractive when it comes to the efficient estimation of large number of channels in massive MIMO systems. Moreover, to avoid pilot contamination and to achieve better spectral efficiency, it is important to use a small number of pilots. We present a novel channel estimation approach which utilizes the sparsity and common support properties to estimate sparse channels and requires a small number of pilots. Two algorithms based on this approach have been developed that perform Bayesian estimates of sparse channels even when the prior is non-Gaussian or unknown. Neighboring antennas share among each other their beliefs about the locations of active channel taps to perform estimation. The coordinated approach improves channel estimates and also reduces the required number of pilots. Further improvement is achieved by the data-aided version of the algorithm. Extensive simulation results are provided to demonstrate the performance of the proposed algorithms.

  19. Sparse PCA with Oracle Property.

    Science.gov (United States)

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of the covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank k, and attains a [Formula: see text] statistical rate of convergence, with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator, which enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

  20. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    Science.gov (United States)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  1. TH-E-17A-02: High-Pitch and Sparse-View Helical 4D CT Via Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Guo, M; Nam, H; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: 4D CT is routinely performed during radiation therapy treatment planning of thoracic and abdominal cancers. Compared with the cine mode, the helical mode is advantageous in temporal resolution. However, a low pitch (∼0.1) for 4D CT imaging is often required instead of the standard pitch (∼1) for static imaging, since standard image reconstruction based on analytic methods requires low-pitch scanning in order to satisfy the data sufficiency condition when reconstructing each temporal frame individually. In comparison, the flexible iterative method enables the reconstruction of all temporal frames simultaneously, so that the image similarity among frames can be utilized to possibly perform high-pitch and sparse-view helical 4D CT imaging. The purpose of this work is to investigate such an exciting possibility for faster imaging with lower dose. Methods: A key to high-pitch and sparse-view helical 4D CT imaging is the simultaneous reconstruction of all temporal frames using the prior that temporal frames are continuous along the temporal direction. In this work, such a prior is regularized through the sparsity transform based on the spatiotemporal tensor framelet (TF), a multilevel and high-order extension of the total variation transform. Moreover, GPU-based fast parallel computing of the X-ray transform and its adjoint, together with the split Bregman method, is utilized for solving the 4D image reconstruction problem efficiently and accurately. Results: The simulation studies based on 4D NCAT phantoms were performed with various pitches (i.e., 0.1, 0.2, 0.5, and 1) and sparse views (i.e., 400 views per rotation instead of the standard >2000 views per rotation), using a 3D iterative individual reconstruction method based on 3D TF and a 4D iterative simultaneous reconstruction method based on 4D TF, respectively. Conclusion: The proposed TF-based simultaneous 4D image reconstruction method enables high-pitch and sparse-view helical 4D CT with lower dose and faster speed

  2. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    Science.gov (United States)

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of the ECG signal and noises, and can reduce the computational load of the sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, and sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as the number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce the computational load as compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal.

  3. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  4. Structural Sparse Tracking

    KAUST Repository

    Zhang, Tianzhu; Yang, Ming-Hsuan; Ahuja, Narendra; Ghanem, Bernard; Yan, Shuicheng; Xu, Changsheng; Liu, Si

    2015-01-01

    candidate. We show that our SST algorithm accommodates most existing sparse trackers with the respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs

  5. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner

    Directory of Open Access Journals (Sweden)

    Yubo Wang

    2017-06-01

    Full Text Available It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model to a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed method Sparse-BMFLC has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% in reconstruction error.

  6. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    Science.gov (United States)

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model to a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed method Sparse-BMFLC has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% in reconstruction error.
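
    The reformulation described in these two records amounts to regressing the signal on a band-limited dictionary of sine and cosine components and encouraging sparse weights. A minimal sketch using an L1-penalized least-squares fit (a generic convex solver standing in for the authors' specific algorithm, and a static fit rather than the adaptive combiner) is shown below.

        import numpy as np
        from sklearn.linear_model import Lasso

        fs = 100.0                                          # sampling rate in Hz, illustrative
        t = np.arange(0.0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        signal = 1.2 * np.sin(2 * np.pi * 6.0 * t) + 0.05 * rng.normal(size=t.size)

        # Band-limited Fourier dictionary: sines and cosines on a dense frequency grid
        freqs = np.arange(3.0, 12.0 + 1e-9, 0.1)
        D = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                       np.cos(2 * np.pi * np.outer(t, freqs))])

        # Sparse weights: only a few frequency atoms should remain active
        model = Lasso(alpha=0.01, max_iter=50_000)
        model.fit(D, signal)
        print(np.count_nonzero(np.abs(model.coef_) > 1e-3), "active dictionary atoms")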

  7. SPATIAL-SPECTRAL CLASSIFICATION BASED ON THE UNSUPERVISED CONVOLUTIONAL SPARSE AUTO-ENCODER FOR HYPERSPECTRAL REMOTE SENSING IMAGERY

    Directory of Open Access Journals (Sweden)

    X. Han

    2016-06-01

    Full Text Available Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly consider concatenating the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window selection strategy selects the sub-window spatial-spectral information for spatial-spectral feature learning and extraction with the sparse auto-encoder (SAE). A convolution mechanism is applied after the SAE feature extraction stage, using the SAE features over the larger outer window. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets – the Pavia University dataset and the Kennedy Space Centre (KSC) dataset – which shows an improvement over the traditional hyperspectral spatial-spectral classification methods.

  8. Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning.

    Science.gov (United States)

    Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve

    2017-12-01

    In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the proposed method. Experiments and analyses in many practical applications prove the effectiveness of the proposed method.

  9. The generalized successive approximation and Padé Approximants method for solving an elasticity problem of based on the elastic ground with variable coefficients

    Directory of Open Access Journals (Sweden)

    Mustafa Bayram

    2017-01-01

    Full Text Available In this study, we have applied a generalized successive numerical technique to solve the elasticity problem based on the elastic ground with variable coefficients. In the first stage, we calculate the generalized successive approximation of the given BVP, and in the second stage we transform it into a Padé series. At the end of the study, a test problem is given to clarify the method.
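
    Independently of the elasticity application, the second stage (turning a truncated series into a Padé rational approximant) can be illustrated with a generic routine; the exponential series below is just a convenient example, not the series arising from the paper's boundary value problem.

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of exp(x) up to x^4: 1 + x + x^2/2 + x^3/6 + x^4/24
        taylor = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

        # [2/2] Padé approximant: numerator p and denominator q as polynomials
        p, q = pade(taylor, 2)

        x = 1.5
        print(p(x) / q(x), np.exp(x))   # rational approximation vs. the true value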

  10. SparseM: A Sparse Matrix Package for R *

    Directory of Open Access Journals (Sweden)

    Roger Koenker

    2003-02-01

    Full Text Available SparseM provides some basic R functionality for linear algebra with sparse matrices. Use of the package is illustrated by a family of linear model fitting functions that implement least squares methods for problems with sparse design matrices. Significant performance improvements in memory utilization and computational speed are possible for applications involving large sparse matrices.

  11. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  12. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  13. Approximation-Based Discrete-Time Adaptive Position Tracking Control for Interior Permanent Magnet Synchronous Motors.

    Science.gov (United States)

    Yu, Jinpeng; Shi, Peng; Yu, Haisheng; Chen, Bing; Lin, Chong

    2015-07-01

    This paper considers the problem of discrete-time adaptive position tracking control for an interior permanent magnet synchronous motor (IPMSM) based on fuzzy approximation. Fuzzy logic systems are used to approximate the nonlinearities of the discrete-time IPMSM drive system, which is derived by direct discretization using the Euler method, and a discrete-time fuzzy position tracking controller is designed via the backstepping approach. In contrast to existing results, the advantage of the scheme is that the number of adjustable parameters is reduced to only two and the problem of coupling nonlinearity can be overcome. It is shown that the proposed discrete-time fuzzy controller can guarantee that the tracking error converges to a small neighborhood of the origin and that all the signals are bounded. Simulation results illustrate the effectiveness and the potential of the theoretical results obtained.

  14. Solving multi-objective job shop problem using nature-based algorithms: new Pareto approximation features

    Directory of Open Access Journals (Sweden)

    Jarosław Rudy

    2015-01-01

    Full Text Available In this paper the job shop scheduling problem (JSP) with two criteria minimized simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization. Multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for MOJSP. Both methods employ a technique taken from multi-criteria decision analysis in order to establish a ranking of solutions. ACO and GA differ in how they keep information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by these algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.
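
    Comparing the Pareto approximations returned by two solvers requires extracting the non-dominated solutions from each run. A minimal dominance filter for minimized objectives, independent of the specific ACO and GA implementations in the paper and using made-up objective values, could look like this:

```python
def dominates(q, p):
    """q dominates p (minimization): no worse in every objective, better in at least one."""
    return all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example: hypothetical (makespan, total tardiness) pairs of candidate schedules
solutions = [(120, 30), (110, 45), (130, 25), (110, 50)]
print(pareto_front(solutions))   # [(120, 30), (110, 45), (130, 25)]
```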

  15. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    Full Text Available We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operators associated with relatively A-maximal m-relaxed monotone operators, together with the new generalized Yosida approximations based on this framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and of the Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions of (relatively) maximal monotone mappings in Hilbert as well as Banach spaces and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.
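
    As a point of reference for the generalization discussed above, the classical resolvent and Yosida approximation of a maximal monotone operator A on a Hilbert space are defined, for λ > 0, by:

```latex
J_\lambda = (I + \lambda A)^{-1}, \qquad
A_\lambda = \frac{1}{\lambda}\,(I - J_\lambda), \qquad A_\lambda x \in A\!\left(J_\lambda x\right),
```

    and A_λ is single-valued, Lipschitz continuous, and monotone; the paper replaces ordinary maximal monotonicity in these definitions by the relatively A-maximal m-relaxed monotonicity framework.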

  16. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
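
    One reason the conversion to a gPC representation is convenient is that, for an orthonormal polynomial basis, the Sobol' variance-based indices follow directly from the expansion coefficients. Writing the interpolant as f(x) ≈ Σ_α c_α Ψ_α(x) with orthonormal Ψ_α and Ψ_0 ≡ 1, the total variance and the first-order index of variable x_i are:

```latex
\operatorname{Var}(f) = \sum_{\alpha \neq 0} c_\alpha^2, \qquad
S_i = \frac{1}{\operatorname{Var}(f)} \sum_{\alpha \in \mathcal{A}_i} c_\alpha^2,
```

    where A_i collects the multi-indices whose only nonzero entry is in position i; higher-order and derivative-based indices are obtained analogously from the same coefficients.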

  17. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    Science.gov (United States)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a challenging task to efficiently produce planetary mapping products from orbital remote sensing images. There are many difficulties in the photogrammetric processing of planetary stereo images, such as the lack of ground control information and of informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  18. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Full Text Available Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while still lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and singular value thresholding (SVT) simultaneously to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves several dBs in terms of peak signal to noise ratio (PSNR) and preserves image details significantly compared to most of the current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore, the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
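
    The SVT step mentioned in item (3) is usually implemented as a shrinkage of the singular values of a matrix estimate. A minimal sketch of the standard convex (soft-thresholding) form of that operator, which the paper generalizes to a nonconvex surrogate, is given below; the matrix sizes and threshold are arbitrary:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink the singular values of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold the spectrum
    return (U * s_shrunk) @ Vt               # reassemble the low-rank estimate

# Example on a noisy low-rank matrix (illustrative sizes only)
rng = np.random.default_rng(1)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
X = L + 0.1 * rng.standard_normal((50, 40))
X_denoised = svt(X, tau=2.0)
```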

  19. Better Size Estimation for Sparse Matrix Products

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...

  20. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    Science.gov (United States)

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  1. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    Science.gov (United States)

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity. Jean-Luc Guermond, Texas A&M University. Final report, covering 01-07-2012 to 30-06-2015 (report date 09-05-2015). Conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique.

  2. Simple analytical approximation for rotationally inelastic rate constants based on the energy corrected sudden scaling law

    International Nuclear Information System (INIS)

    Smith, N.; Pritchard, D.E.

    1981-01-01

    We have recently demonstrated that the energy corrected sudden (ECS) scaling law of De Pristo et al., when combined with the power-law assumption for the basis rates k_{l→0} ∝ [l(l+1)]^{-g}, can accurately fit a wide body of rotational energy transfer data. We develop a simple and accurate approximation to this fitting law, and in addition mathematically show the connection between it and our earlier proposed energy-based law, which has also been successful in describing both theoretical and experimental data on rotationally inelastic collisions.

  3. Molecular Solid EOS based on Quasi-Harmonic Oscillator approximation for phonons

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    A complete equation of state (EOS) for a molecular solid is derived utilizing a Helmholtz free energy. Assuming that the solid is nonconducting, phonon excitations dominate the specific heat. Phonons are approximated as independent quasi-harmonic oscillators with vibrational frequencies depending on the specific volume. The model is suitable for calibrating an EOS based on isothermal compression data and infrared/Raman spectroscopy data from high pressure measurements utilizing a diamond anvil cell. In contrast to a Mie-Gruneisen EOS developed for an atomic solid, the specific heat and Gruneisen coefficient depend on both density and temperature.
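
    For reference, the quasi-harmonic Helmholtz free energy underlying this kind of EOS has the textbook form shown below (the paper's specific calibration and cold curve E_c(V) may differ):

```latex
F(V,T) = E_c(V) + \sum_i \left[ \tfrac{1}{2}\hbar\omega_i(V)
        + k_B T \,\ln\!\left(1 - e^{-\hbar\omega_i(V)/k_B T}\right) \right],
\qquad \gamma_i = -\frac{d\ln\omega_i}{d\ln V},
```

    so that both the specific heat and the effective Grüneisen coefficient inherit a dependence on density and temperature through the volume-dependent mode frequencies ω_i(V).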

  4. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed within the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in the system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normal distribution variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.

  5. Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.

    Science.gov (United States)

    Fang, Hongyan; Zhang, Hong; Yang, Yaning

    2016-07-01

    Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way for identifying susceptible genes. Rare variants typically have low frequencies of minor alleles, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.

  6. Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information.

    Science.gov (United States)

    Jia, Bin; Wang, Xiaodong

    2013-12-17

    The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.

  7. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir; Al-Naffouri, Tareq Y.

    2013-01-01

    The method requires no a priori knowledge of the block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports in order to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal.

  8. Strips of hourly power options. Approximate hedging using average-based forward contracts

    International Nuclear Information System (INIS)

    Lindell, Andreas; Raab, Mikael

    2009-01-01

    We study approximate hedging strategies for a contingent claim consisting of a strip of independent hourly power options. The payoff of the contingent claim is a sum of the contributing hourly payoffs. As there is no forward market for specific hours, the fundamental problem is to find a reasonable hedge using exchange-traded forward contracts, e.g. average-based monthly contracts. The main result is a simple dynamic hedging strategy that reduces a significant part of the variance. The idea is to decompose the contingent claim into mathematically tractable components and to use empirical estimations to derive hedging deltas. Two benefits of the method are that the technique easily extends to more complex power derivatives and that only a few parameters need to be estimated. The hedging strategy based on the decomposition technique is compared with dynamic delta hedging strategies based on local minimum variance hedging, using a correlated traded asset. (author)

  9. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.

  10. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  11. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
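
    The key computational idea described above is that circular convolution diagonalizes under the DFT, so the main linear system can be solved independently per frequency. A minimal single-filter illustration of that frequency-domain solve (not the full ADMM dictionary-learning algorithm; sizes and the penalty rho are arbitrary) is:

```python
import numpy as np

# Solve (D^T D + rho*I) x = b, where D is circular convolution with the filter d.
# In the Fourier domain the operator is diagonal, so the solve costs O(N log N).
N, rho = 256, 1.0
rng = np.random.default_rng(0)
d = rng.standard_normal(N)        # filter, zero-padded to the signal length
b = rng.standard_normal(N)        # right-hand side

D_hat = np.fft.fft(d)
x_hat = np.fft.fft(b) / (np.abs(D_hat) ** 2 + rho)
x = np.real(np.fft.ifft(x_hat))   # solution of the regularized normal equations
```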

  12. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are some of the methodologies that are employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required, and further, the results of sensitivity analysis, which is needed for searching direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical problem of reliability-based design optimization. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function calculations are needed, provided the failure probability or statistical moments have already been calculated.

  13. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  14. A Sparse Dictionary Learning-Based Adaptive Patch Inpainting Method for Thick Clouds Removal from High-Spatial Resolution Remote Sensing Imagery.

    Science.gov (United States)

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi

    2017-09-15

    Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering the missing information corrupted by thick clouds patch-by-patch. A feature dictionary is learned from exemplars in the cloud-free regions, which is later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity is introduced to encourage filling in missing patches on image structures first. The optimization model of patch inpainting is formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and true clouds. Comparisons and experiments show that our method can not only keep structures and textures consistent with the surrounding ground information, but also yields few smoothing and blocking artifacts, making it more suitable for the removal of clouds from high-spatial resolution RS imagery with salient structures and abundant textured features.
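
    The patch-filling step relies on sparse coding over the learned dictionary with a modified orthogonal matching pursuit. The unmodified OMP core, without the structure-sparsity and neighborhood-consistency terms the paper adds, can be sketched as follows (dictionary D is assumed to have roughly unit-norm columns):

```python
import numpy as np

def omp(D, y, k):
    """Standard orthogonal matching pursuit: approximate y with k atoms of D."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        # refit the coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```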

  15. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  16. Bilinear Approximate Model-Based Robust Lyapunov Control for Parabolic Distributed Collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2016-11-09

    This brief addresses the control problem of distributed parabolic solar collectors in order to maintain the field outlet temperature around a desired level. The objective is to design an efficient controller to force the outlet fluid temperature to track a set reference despite the unpredictable varying working conditions. In this brief, a bilinear model-based robust Lyapunov control is proposed to achieve the control objectives with robustness to the environmental changes. The bilinear model is a reduced order approximate representation of the solar collector, which is derived from the hyperbolic distributed equation describing the heat transport dynamics by means of a dynamical Gaussian interpolation. Using the bilinear approximate model, a robust control strategy is designed applying Lyapunov stability theory combined with a phenomenological representation of the system in order to stabilize the tracking error. On the basis of the error analysis, simulation results show good performance of the proposed controller, in terms of tracking accuracy and convergence time, with limited measurement even under unfavorable working conditions. Furthermore, the presented work is of interest for a large category of dynamical systems knowing that the solar collector is representative of physical systems involving transport phenomena constrained by unknown external disturbances.

  17. Combined Forecasting Method of Landslide Deformation Based on MEEMD, Approximate Entropy, and WLS-SVM

    Directory of Open Access Journals (Sweden)

    Shaofeng Xie

    2017-01-01

    Full Text Available Given the chaotic characteristics of landslide deformation time series, a new method based on modified ensemble empirical mode decomposition (MEEMD), approximate entropy, and the weighted least square support vector machine (WLS-SVM) is proposed. The method starts from time-frequency analysis of the chaotic sequence and improves the model performance as follows: first, a deformation time series is decomposed into a series of subsequences with significantly different complexity using MEEMD. Then the approximate entropy method is used to generate a new subsequence for the combination of subsequences with similar complexity, which can effectively concentrate the component feature information and reduce the computational scale. Finally, the WLS-SVM prediction model is established for each new subsequence. At the same time, phase space reconstruction theory and the grid search method are used to select the input dimension and the optimal parameters of the model, and the superposition of the predicted values gives the final forecasting result. Taking the landslide deformation data of Danba as an example, experiments were carried out and compared with a wavelet neural network, a support vector machine, a least square support vector machine and various combination schemes. The experimental results show that the algorithm has high prediction accuracy. It can ensure a better prediction effect even in landslide deformation periods of rapid fluctuation, and it can also better control the residual value and effectively reduce the error interval.
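
    The approximate entropy used to group subsequences of similar complexity is a standard statistic. A compact implementation of Pincus' definition is sketched below; the embedding dimension m and tolerance r are the usual heuristics, not values taken from the paper:

```python
import numpy as np

def approximate_entropy(u, m=2, r=None):
    """ApEn(m, r) of a 1-D series u; larger values indicate higher irregularity."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    if r is None:
        r = 0.2 * np.std(u)                       # common heuristic tolerance
    def phi(mm):
        # all overlapping templates of length mm
        x = np.array([u[i:i + mm] for i in range(N - mm + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)               # fraction of matching templates
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)
```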

  18. Optimal approximations for risk measures of sums of lognormals based on conditional expectations

    Science.gov (United States)

    Vanduffel, S.; Chen, X.; Dhaene, J.; Goovaerts, M.; Henrard, L.; Kaas, R.

    2008-11-01

    In this paper we investigate approximations for the distribution function of a sum S of lognormal random variables. These approximations are obtained by considering the conditional expectation E[S|Λ] of S with respect to a conditioning random variable Λ.

  19. Remote Sensing Scene Classification Based on Convolutional Neural Networks Pre-Trained Using Attention-Guided Sparse Filters

    Directory of Open Access Journals (Sweden)

    Jingbo Chen

    2018-02-01

    Full Text Available Semantic-level land-use scene classification is a challenging problem, in which deep learning methods, e.g., convolutional neural networks (CNNs), have shown remarkable capacity. However, a lack of sufficient labeled images has proved a hindrance to increasing the land-use scene classification accuracy of CNNs. Aiming at this problem, this paper proposes a CNN pre-training method under the guidance of a human visual attention mechanism. Specifically, a computational visual attention model is used to automatically extract salient regions in unlabeled images. Then, sparse filters are adopted to learn features from these salient regions, with the learnt parameters used to initialize the convolutional layers of the CNN. Finally, the CNN is further fine-tuned on labeled images. Experiments are performed on the UCMerced and AID datasets, which show that when combined with a demonstrative CNN, our method can achieve 2.24% higher accuracy than a plain CNN and can obtain an overall accuracy of 92.43% when combined with AlexNet. The results indicate that the proposed method can effectively improve CNN performance using easy-to-access unlabeled images and thus will enhance the performance of land-use scene classification especially when a large-scale labeled dataset is unavailable.

  20. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real-world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage the classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  1. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  2. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  3. Aft-body loading function for penetrators based on the spherical cavity-expansion approximation.

    Energy Technology Data Exchange (ETDEWEB)

    Longcope, Donald B., Jr.; Warren, Thomas Lynn; Duong, Henry

    2009-12-01

    In this paper we develop an aft-body loading function for penetration simulations that is based on the spherical cavity-expansion approximation. This loading function assumes that there is a preexisting cavity of radius a₀ before the expansion occurs. This causes the radial stress on the cavity surface to be less than what is obtained if the cavity is opened from a zero initial radius. This in turn causes less resistance on the aft body as it penetrates the target which allows for greater rotation of the penetrator. Results from simulations are compared with experimental results for oblique penetration into a concrete target with an unconfined compressive strength of 23 MPa.

  4. Behavioral features recognition and oestrus detection based on fast approximate clustering algorithm in dairy cows

    Science.gov (United States)

    Tian, Fuyang; Cao, Dong; Dong, Xiaoning; Zhao, Xinqiang; Li, Fade; Wang, Zhonghua

    2017-06-01

    Behavioural feature recognition is important for detecting oestrus and sickness in dairy herds, and there is a need for a heat detection aid. In this paper, the detection method is based on measurements of individual behavioural activity, standing time, and temperature of dairy cows using a vibration sensor and a temperature sensor. The data on behavioural activity index, standing time, lying time and walking time were sent to a computer by a low-power wireless communication system. A fast approximate K-means algorithm (FAKM) is proposed to process the sensor data for behavioural feature recognition. As a result of technical progress in monitoring cows using computers, automatic oestrus detection has become possible.

  5. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Directory of Open Access Journals (Sweden)

    Gianluca Valentino

    2014-02-01

    Full Text Available Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.

  6. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Science.gov (United States)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.

  7. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    International Nuclear Information System (INIS)

    Wang, Yan; Zhou, Jiliu; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Shen, Dinggang; Wu, Xi; Lalush, David S; Lin, Weili

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated to iteratively refine the prediction. Also, a patch-selection-based dictionary construction method is further used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. (paper)

  8. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Full Text Available Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on the robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, and the nonlinearity is merely treated as an outlier, which has an underlying sparse property. The RCSR simultaneously takes the collaborative sparse property of the abundance and the sparsely distributed additive property of the outlier into consideration, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. The qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with the other four state-of-the-art algorithms.

  9. Sparse Learning with Stochastic Composite Optimization.

    Science.gov (United States)

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods can outperform the existing methods in the ability of sparse learning, while at the same time improving the high-probability bound to approximately O(log(log(T)/δ)/λT).

  10. Digital-Control-Based Approximation of Optimal Wave Disturbances Attenuation for Nonlinear Offshore Platforms

    Directory of Open Access Journals (Sweden)

    Xiao-Fang Zhong

    2017-12-01

    Full Text Available The irregular wave disturbance attenuation problem for jacket-type offshore platforms involving nonlinear characteristics is studied. The main contribution is that a digital-control-based approximation of an optimal wave disturbance attenuation controller (AOWDAC) is proposed based on iteration control theory, which consists of a feedback item of the offshore state, a feedforward item of the wave force and a nonlinear compensated component with iterative sequences. More specifically, by discussing the discrete model of a nonlinear offshore platform subject to wave forces generated from the Joint North Sea Wave Project (JONSWAP) wave spectrum and linearized wave theory, the original wave disturbance attenuation problem is formulated as a nonlinear two-point-boundary-value (TPBV) problem. By introducing two vector sequences of system states and the nonlinear compensated item, the solution of the introduced nonlinear TPBV problem is obtained. Then, a numerical algorithm is designed to realize the feasibility of AOWDAC based on the deviation of the performance index between adjacent iteration processes. Finally, when the proposed AOWDAC is applied to a jacket-type offshore platform in Bohai Bay, the vibration amplitudes of the displacement and the velocity, as well as the required energy consumption, can be reduced significantly.

  11. Landmark-based elastic registration using approximating thin-plate splines.

    Science.gov (United States)

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and makes it possible to take into account landmark localization errors. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.

  12. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  13. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn [Institute of Natural Sciences, Department of Mathematics, and MOE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China); Lin, Guang, E-mail: lin491@purdue.edu [Department of Mathematics, School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States); Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Yang, Xu, E-mail: xuyang@math.ucsb.edu [Department of Mathematics, University of California, Santa Barbara, CA 93106 (United States)

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of the seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO, which employs FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
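
    Underneath the multi-level, multi-fidelity scheme, each particle still follows the standard PSO update, written here for completeness (the FGA solvers only change how the objective is evaluated, and the notation below is the usual textbook form rather than the paper's):

```latex
v_i^{k+1} = w\,v_i^{k} + c_1 r_1 \left(p_i^{\text{best}} - x_i^{k}\right)
          + c_2 r_2 \left(g^{\text{best}} - x_i^{k}\right),
\qquad x_i^{k+1} = x_i^{k} + v_i^{k+1},
```

    with inertia weight w, acceleration constants c_1, c_2, and uniform random numbers r_1, r_2 in [0, 1].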

  14. SPARSE FARADAY ROTATION MEASURE SYNTHESIS

    International Nuclear Information System (INIS)

    Andrecut, M.; Stil, J. M.; Taylor, A. R.

    2012-01-01

    Faraday rotation measure synthesis is a method for analyzing multichannel polarized radio emissions, and it has emerged as an important tool in the study of Galactic and extragalactic magnetic fields. The method requires the recovery of the Faraday dispersion function from measurements restricted to limited wavelength ranges, which is an ill-conditioned deconvolution problem. Here, we discuss a recovery method that assumes a sparse approximation of the Faraday dispersion function in an overcomplete dictionary of functions. We discuss the general case when both thin and thick components are included in the model, and we present the implementation of a greedy deconvolution algorithm. We illustrate the method with several numerical simulations that emphasize the effect of the covered range and sampling resolution in the Faraday depth space, and the effect of noise on the observed data.

  15. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    KAUST Repository

    Jiang, Lijian

    2009-10-02

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information is expensive. In this paper, we propose the use of approximate global information based on partial upscaling. A requirement for partial homogenization is to capture long-range (non-local) effects present in the fine-scale solution, while homogenizing some of the smallest scales. The local information at these smallest scales is captured in the computation of basis functions. Thus, the proposed approach allows us to avoid the computations at the scales that can be homogenized. This results in coarser problems for the computation of global fields. We analyze the convergence of the proposed method. Mathematical formalism is introduced, which allows estimating the errors due to small scales that are homogenized. The proposed method is applied to simulate two-phase flows in heterogeneous porous media. Numerical results are presented for various permeability fields, including those generated using two-point correlation functions and channelized permeability fields from the SPE Comparative Project (Christie and Blunt, SPE Reserv Evalu Eng 4:308-317, 2001). We consider simple cases where one can identify the scales that can be homogenized. For more general cases, we suggest the use of upscaling on the coarse grid with the size smaller than the target coarse grid where multiscale basis functions are constructed. This intermediate coarse grid renders a partially upscaled solution that contains essential non-local information. Numerical examples demonstrate that the use of approximate global information provides better accuracy than purely local multiscale methods. © 2009 Springer Science+Business Media B.V.

  16. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, which directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of very low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of low SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal region is distinguished from the noise so that the first arrival time can be picked accurately. To demonstrate the effectiveness of the AN method, we perform many experiments on a series of synthetic data with SNR ranging from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods achieve good picking results when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, the application results on real three-component microseismic data also show that the new method is superior to the other two methods in accuracy and stability.
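
    A widely used approximation of negentropy is Hyvärinen's contrast-function form, which replaces the entropy integral with expectations of nonquadratic functions G_i; it may or may not be the exact variant adopted in the paper, but it illustrates the kind of quantity the AN spectrum is built from:

```latex
J(y) \;\approx\; \sum_i k_i \left[\, \mathbb{E}\{G_i(y)\} - \mathbb{E}\{G_i(\nu)\} \,\right]^2 ,
```

    where ν is a standard Gaussian variable and the k_i are positive constants; the picking method then thresholds the resulting AN spectrum to separate the signal region from noise.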

  17. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit

    2012-06-01

    The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have only been few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtimes. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem, hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup increases further, and with execution times below 1 s, sparse grids are well-suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

  18. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily leads to poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
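
    The low-rank recovery step can be sketched as a Robust PCA decomposition solved with an inexact augmented Lagrange multiplier (ALM) scheme. The sketch below is a generic version of that algorithm, not the authors' implementation: the feature dictionary D (whose columns would be the stacked color-moment/Gabor vectors) is split into a low-rank part L and a sparse error part S, and the data, tolerance and parameter choices are illustrative.

```python
# Hedged sketch: Robust PCA via inexact ALM, standing in for the low-rank dictionary recovery.
import numpy as np

def shrink(X, tau):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_inexact_alm(D, tol=1e-7, max_iter=500):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)
    Y = D / max(norm_two, np.abs(D).max() / lam)     # dual variable initialization
    S = np.zeros_like(D)
    mu = 1.25 / norm_two
    mu_max, rho = mu * 1e7, 1.5
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)            # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)         # sparse-error update
        Y = Y + mu * (D - L - S)
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(D - L - S) / np.linalg.norm(D) < tol:
            break
    return L, S

rng = np.random.default_rng(0)
D = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 60))   # rank-5 "dictionary"
D[rng.random(D.shape) < 0.05] += 10.0                             # sparse corruptions
L, S = rpca_inexact_alm(D)
print(np.linalg.matrix_rank(L, tol=1e-6), int(np.count_nonzero(np.abs(S) > 1e-6)))
```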

  19. Approximate soil-structure interaction with separation of base mat from soil (lifting-off)

    International Nuclear Information System (INIS)

    Wolf, J.P.

    1975-01-01

    In reactor buildings having a shield building (outer concrete shell) with a large mass, which is particularly the case if the plant is designed to withstand airplane crash, large overturning moments are developed by earthquake loading. In this paper, the standard linear elastic half-space theory is used in the soil-structure interaction model. For a circular base mat, if the overturning moment exceeds the product of the normal force (dead weight minus the effect of the vertical earthquake) and one-third of the radius, then tension will occur in the area of contact, assuming distribution of stress as in the static case. For a strip foundation the same occurs if the eccentricity of the normal force exceeds a quarter of the total width. As tension is incompatible with the constitutive law of soils, the base mat will become partially separated from the foundation. Assuming that only normal stresses in compression and corresponding shear stresses (friction) can occur in the area of contact, a method of analyzing soil-structure interaction including lifting-off is derived, which otherwise is based on elastic behaviour of the soil. First a rigorous iterative procedure is outlined based on (complex) dynamic influence matrices of displacements on the surface of an elastic half-space at a certain distance from a rigid disc or strip. A similar, approximate method is then developed which is used throughout the paper. As an example the dynamic response of the reactor building of a 1000 Megawatt plant to earthquake motion is calculated. The results of the analysis, including lift-off, are compared to those of the linear case. (Auth.)

  20. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim

    2018-02-28

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90's in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.
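
    For orientation, the combination technique referred to in this and the following records can be summarized by the classical two-dimensional formula below (a textbook identity, not taken from these abstracts): the sparse-grid solution at level n is assembled from solutions computed on coarse anisotropic tensor-product grids of levels (l1, l2).

```latex
% Classical sparse-grid combination technique in d = 2 dimensions at level n:
% anisotropic tensor-product solutions are combined with weights +1 and -1.
u^{\mathrm{c}}_{n} \;=\; \sum_{\ell_1 + \ell_2 = n} u_{(\ell_1,\ell_2)}
\;-\; \sum_{\ell_1 + \ell_2 = n-1} u_{(\ell_1,\ell_2)}
```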

  1. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim

    2017-07-30

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  2. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2018-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90's in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  3. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2017-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance with the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  4. Multichannel Signals Reconstruction Based on Tunable Q-Factor Wavelet Transform-Morphological Component Analysis and Sparse Bayesian Iteration for Rotating Machines

    Directory of Open Access Journals (Sweden)

    Qing Li

    2018-04-01

    Full Text Available High-speed remote transmission and large-capacity data storage are difficult issues in signal acquisition for rotating machine condition monitoring. To address these concerns, a novel multichannel signal reconstruction approach based on tunable Q-factor wavelet transform-morphological component analysis (TQWT-MCA) and a sparse Bayesian iteration algorithm combined with a step-impulse dictionary is proposed under the framework of compressed sensing (CS). To begin with, to prevent the loss of periodical impulses and to effectively separate them from external noise and additive interference components, the TQWT-MCA method is introduced to divide the raw vibration signal into a low-resonance component (LRC, i.e., periodical impulses) and a high-resonance component (HRC); thus, the periodical impulses are preserved effectively. Then, according to the amplitude range of the generated LRC, the step-impulse dictionary atom is designed to match the physical structure of the periodical impulses. Furthermore, the periodical impulses and the HRC are reconstructed by the sparse Bayesian iteration combined with the step-impulse dictionary, respectively; finally, the reconstructed raw signals are obtained by adding the LRC and HRC, and the fidelity of the reconstructed signals is tested by envelope spectrum and error analysis, respectively. In this work, the proposed algorithm is applied to a simulated signal and to engineering multichannel signals of a gearbox with multiple faults. Experimental results demonstrate that the proposed approach significantly improves the reconstruction accuracy compared with state-of-the-art methods such as non-convex Lq (q = 0.5) regularization, spatiotemporal sparse Bayesian learning (SSBL) and L1-norm, etc. Additionally, the speed of storage and transmission is increased dramatically; more importantly, the fault characteristics of the gearbox with multiple faults are detected and saved, i.e., the

  5. Sparse inpainting and isotropy

    Energy Technology Data Exchange (ETDEWEB)

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Marinucci, Domenico; Cammarota, Valentina [Department of Mathematics, University of Rome Tor Vergata, via della Ricerca Scientifica 1, Roma, 00133 (Italy); Wandelt, Benjamin D., E-mail: s.feeney@imperial.ac.uk, E-mail: marinucc@axp.mat.uniroma2.it, E-mail: jason.mcewen@ucl.ac.uk, E-mail: h.peiris@ucl.ac.uk, E-mail: wandelt@iap.fr, E-mail: cammarot@axp.mat.uniroma2.it [Kavli Institute for Theoretical Physics, Kohn Hall, University of California, 552 University Road, Santa Barbara, CA, 93106 (United States)

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  6. Minimal-Approximation-Based Distributed Consensus Tracking of a Class of Uncertain Nonlinear Multiagent Systems With Unknown Control Directions.

    Science.gov (United States)

    Choi, Yun Ho; Yoo, Sung Jin

    2017-03-28

    A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate the unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower, regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown-control-direction problem in the minimal-approximation-based distributed consensus tracking framework, and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.

  7. Sparse matrix test collections

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will discuss plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes each, followed by a discussion from the floor.

  8. An approximate-reasoning-based method for screening flammable gas tanks

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    1998-03-01

    High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watch list should be kept as close to the true number of flammable-gas-risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations. (1) A complete tank evaluation where the entire algorithm is used. This was done for two tanks, U-106 and AW-104. U-106 is a single-shell tank with large sludge and saltcake layers. AW-104 is a double-shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations using a submodule for the predictor likelihood for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).

  9. Speeding up the flash calculations in two-phase compositional flow simulations - The application of sparse grids

    KAUST Repository

    Wu, Yuanqing

    2015-03-01

    Flash calculations have become a performance bottleneck in the simulation of compositional flow in subsurface reservoirs. We apply a sparse grid surrogate model to substitute for the flash calculation and thus remove this bottleneck from the reservoir simulation. Instead of doing a flash calculation in each time step of the simulation, we generate a sparse grid approximation of all possible results of the flash calculation before the reservoir simulation. During the simulations we then evaluate this surrogate model to approximate the flash calculation results. The execution of the true flash calculation has thus been shifted from the online phase during the simulation to the offline phase before the simulation. Sparse grids are known to require only a few unknowns in order to obtain good approximation qualities. In conjunction with local adaptivity, sparse grids ensure that the accuracy of the surrogate is acceptable while keeping the memory usage small by storing only a minimal amount of values for the surrogate. The accuracy of the sparse grid surrogate during the reservoir simulation is compared to the accuracy of a surrogate based on regular Cartesian grids and to the original flash calculation. The surrogate model improves the speed of the flash calculations and of the simulation of the whole reservoir. In an experiment, it is shown that the speed of the online flash calculations is increased by about 2000 times and, as a result, the speed of the reservoir simulations is enhanced by a factor of 21 under the best conditions.
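
    The offline/online split described above can be sketched as follows. In this hedged example, a regular tensor grid with SciPy's RegularGridInterpolator stands in for the adaptive sparse-grid surrogate, and flash_calculation() together with its two inputs (pressure and overall composition) are hypothetical placeholders for the true equation-of-state flash solve.

```python
# Hedged sketch of the offline/online surrogate idea for flash calculations.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def flash_calculation(p, z):
    # placeholder for the expensive equation-of-state flash solve
    return np.tanh(3.0 * z - 1.0) * np.exp(-p / 50.0)

# ---- offline phase: tabulate the flash results once, before the simulation ----
pressures = np.linspace(10.0, 100.0, 33)
compositions = np.linspace(0.0, 1.0, 33)
table = np.array([[flash_calculation(p, z) for z in compositions] for p in pressures])
surrogate = RegularGridInterpolator((pressures, compositions), table)

# ---- online phase: each simulation time step only evaluates the surrogate ----
query = np.array([[57.3, 0.42], [80.1, 0.10]])     # (pressure, composition) pairs
print(surrogate(query))                             # cheap approximation of the flash results
```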

  10. Fuzzy stochastic generalized reliability studies on embankment systems based on first-order approximation theorem

    Directory of Open Access Journals (Sweden)

    Wang Yajun

    2008-12-01

    Full Text Available In order to address the complex uncertainties caused by the interplay between the fuzziness and randomness of the safety problem in embankment engineering projects, and to evaluate the safety of such projects more scientifically and reasonably, this study presents fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique, using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations in order to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions with half depressed gamma distribution, half depressed normal distribution, and half depressed echelon distribution. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan on the Yangtze River, in terms of numerical analysis for the probability integration of reliability on the random field affected by three fuzzy factors. The result shows that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a referential method for solving complex multi-uncertainty problems in engineering safety analysis.

  11. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16 x 16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checks, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well. The validity and feasibility of the system are verified by the results as well. The developed system will benefit the maintenance and management of large-scale PV arrays.
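
    A hedged sketch of the block-wise screening idea is given below: block-mean currents are compared against the array-wide mean to form a fault weight factor, and blocks whose weight exceeds a threshold (and that are not flagged as shaded) are reported. The 16 x 16 partition follows the record, but the weight definition, the threshold value and the shade handling are illustrative assumptions.

```python
# Hedged sketch of block-wise fault screening for a PV array.
import numpy as np

def fault_weights(block_currents):
    """block_currents: 16 x 16 array of mean current per block (A)."""
    mean_all = block_currents.mean()
    # relative deficit of each block with respect to the whole-array mean
    return (mean_all - block_currents) / (mean_all + 1e-12)

def locate_faults(block_currents, threshold=0.3, shade_mask=None):
    w = fault_weights(block_currents)
    faulty = w > threshold
    if shade_mask is not None:          # ignore blocks known to be shaded
        faulty &= ~shade_mask
    return np.argwhere(faulty)          # (row, col) indices of suspect blocks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    currents = 8.0 + 0.2 * rng.standard_normal((16, 16))
    currents[3, 7] = 4.0                # injected fault: one block with low current
    print(locate_faults(currents))
```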

  12. Assessment of correlation energies based on the random-phase approximation

    International Nuclear Information System (INIS)

    Paier, Joachim; Ren, Xinguo; Rinke, Patrick; Scheffler, Matthias; Scuseria, Gustavo E; Grüneis, Andreas; Kresse, Georg

    2012-01-01

    The random-phase approximation to the ground state correlation energy (RPA) in combination with exact exchange (EX) has brought the Kohn-Sham (KS) density functional theory one step closer towards a universal, ‘general purpose first-principles method’. In an effort to systematically assess the influence of several correlation energy contributions beyond RPA, this paper presents dissociation energies of small molecules and solids, activation energies for hydrogen transfer and non-hydrogen transfer reactions, as well as reaction energies for a number of common test sets. We benchmark EX + RPA and several flavors of energy functionals going beyond it: second-order screened exchange (SOSEX), single-excitation (SE) corrections, renormalized single-excitation (rSE) corrections and their combinations. Both the SE correction and the SOSEX contribution to the correlation energy significantly improve on the notorious tendency of EX + RPA to underbind. Surprisingly, activation energies obtained using EX + RPA based on a KS reference alone are remarkably accurate. RPA + SOSEX + rSE provides an equal level of accuracy for reaction as well as activation energies and overall gives the most balanced performance, because of which it can be applied to a wide range of systems and chemical reactions. (paper)

  13. Parameter Estimation for Partial Differential Equations by Collage-Based Numerical Approximation

    Directory of Open Access Journals (Sweden)

    Xiaoyan Deng

    2009-01-01

    into a minimization problem of a function of several variables after the partial differential equation is approximated by a differential dynamical system. Then numerical schemes for solving this minimization problem are proposed, including grid approximation and ant colony optimization. The proposed schemes are applied to a parameter estimation problem for the Belousov-Zhabotinskii equation, and the results show that the proposed approximation method is efficient for both linear and nonlinear partial differential equations with respect to unknown parameters. At worst, the presented method provides an excellent starting point for traditional inversion methods that must first select a good starting point.

  14. Sparse coding reveals greater functional connectivity in female brains during naturalistic emotional experience.

    Directory of Open Access Journals (Sweden)

    Yudan Ren

    Full Text Available Functional neuroimaging is widely used to examine changes in brain function associated with age, gender or neuropsychiatric conditions. fMRI (functional magnetic resonance imaging) studies employ either laboratory-designed tasks that engage the brain with abstracted and repeated stimuli, or resting-state paradigms with little behavioral constraint. Recently, novel neuroimaging paradigms using naturalistic stimuli are gaining increasing attention, as they offer an ecologically valid condition to approximate brain function in real life. Wider application of naturalistic paradigms in exploring individual differences in brain function, however, awaits further advances in statistical methods for modeling dynamic and complex datasets. Here, we developed a novel data-driven strategy that employs group sparse representation to assess gender differences in brain responses during naturalistic emotional experience. Compared to independent component analysis (ICA), the sparse coding algorithm considers the intrinsic sparsity of neural coding and thus could be more suitable for modeling dynamic whole-brain fMRI signals. An online dictionary learning and sparse coding algorithm was applied to the aggregated fMRI signals from both groups, which were subsequently factorized into a common time-series signal dictionary matrix and the associated weight coefficient matrix. Our results demonstrate that group sparse representation can effectively identify gender differences in functional brain networks during natural viewing, with improved sensitivity and reliability over the ICA-based method. Group sparse representation hence offers a superior data-driven strategy for examining brain function during naturalistic conditions, with great potential for clinical application in neuropsychiatric disorders.
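
    The factorization described above can be sketched with an off-the-shelf online dictionary learning routine. In this hedged example, scikit-learn's MiniBatchDictionaryLearning stands in for the algorithm used in the study; the data matrix, the number of components and the sparsity penalty are illustrative placeholders, and the aggregated data are assumed to be arranged as voxels x time points.

```python
# Hedged sketch of group-level sparse representation of fMRI signals.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 200))          # placeholder for aggregated voxel time series

dl = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, batch_size=256, random_state=0)
codes = dl.fit_transform(X)                   # (n_voxels, n_components) sparse weight matrix
dictionary = dl.components_                   # (n_components, n_timepoints) time-series dictionary

# Each dictionary atom is a common temporal pattern; its weights over the voxels of one
# group versus the other can then be mapped back to the brain and compared.
print(codes.shape, dictionary.shape, np.mean(codes != 0))
```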

  15. A diffusion approximation based on renewal processes with applications to strongly biased run–tumble motion

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro

    2016-01-01

    We consider organisms which use a renewal strategy such as run–tumble when moving in space, for example to perform chemotaxis in chemical gradients. We derive a diffusion approximation for the motion, applying a central limit theorem due to Anscombe for renewal-reward processes; this theorem has not previously been applied in this context. [...] The proposed technique for obtaining diffusion approximations is conceptually and computationally simple, and applicable also when statistics of the motion are obtained empirically or through Monte Carlo simulation of the motion.

  16. SU-F-BRB-12: A Novel Haar Wavelet Based Approach to Deliver Non-Coplanar Intensity Modulated Radiotherapy Using Sparse Orthogonal Collimators

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D; Ruan, D; Low, D; Sheng, K [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA (United States); O’Connor, D [Department of Mathematics, University of California Los Angeles, Los Angeles, CA (United States); Boucher, S [RadiaBeam Technologies, Santa Monica, CA (United States)]

    2015-06-15

    Purpose: Existing efforts to replace complex multileaf collimator (MLC) by simple jaws for intensity modulated radiation therapy (IMRT) resulted in unacceptable compromise in plan quality and delivery efficiency. We introduce a novel fluence map segmentation method based on compressed sensing for plan delivery using a simplified sparse orthogonal collimator (SOC) on the 4π non-coplanar radiotherapy platform. Methods: 4π plans with varying prescription doses were first created by automatically selecting and optimizing 20 non-coplanar beams for 2 GBM, 2 head & neck, and 2 lung patients. To create deliverable 4π plans using SOC, which are two pairs of orthogonal collimators with 1 to 4 leaves in each collimator bank, a Haar Fluence Optimization (HFO) method was used to regulate the number of Haar wavelet coefficients while maximizing the dose fidelity to the ideal prescription. The plans were directly stratified utilizing the optimized Haar wavelet rectangular basis. A matching number of deliverable segments were stratified for the MLC-based plans. Results: Compared to the MLC-based 4π plans, the SOC-based 4π plans increased the average PTV dose homogeneity from 0.811 to 0.913. PTV D98 and D99 were improved by 3.53% and 5.60% of the corresponding prescription doses. The average mean and maximal OAR doses slightly increased by 0.57% and 2.57% of the prescription doses. The average number of segments ranged between 5 and 30 per beam. The collimator travel time to create the segments decreased with increasing leaf numbers in the SOC. The two and four leaf designs were 1.71 and 1.93 times more efficient, on average, than the single leaf design. Conclusion: The innovative dose domain optimization based on compressed sensing enables uncompromised 4π non-coplanar IMRT dose delivery using simple rectangular segments that are deliverable using a sparse orthogonal collimator, which only requires 8 to 16 leaves yet is unlimited in modulation resolution. This work is

  17. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    Full Text Available To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie’s method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties – such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC – which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models – in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.

  18. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed up simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914

  19. Rapid and Robust Cross-Correlation-Based Seismic Phase Identification Using an Approximate Nearest Neighbor Method

    Science.gov (United States)

    Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.

    2016-12-01

    The matched filtering technique involving the cross-correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% regional phases (Pn, Pg, Sn, and Lg). The analyses performed on a standard desktop computer show that the proposed approach performs the search of the large template libraries about 20 times faster than the standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
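
    The embedding-plus-tree idea can be sketched as below. In this hedged example, each waveform is represented by its correlation with a small random subset of templates, and SciPy's cKDTree stands in for the randomized k-d trees mentioned in the record; the library size, waveform length and number of anchor templates are illustrative placeholders.

```python
# Hedged sketch of approximate nearest neighbor template matching for seismic waveforms.
import numpy as np
from scipy.spatial import cKDTree

def embed(waveforms, anchors):
    """Correlation of each (unit-normalized) waveform with the anchor templates."""
    w = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return w @ a.T                                    # (n_waveforms, n_anchors) feature vectors

rng = np.random.default_rng(0)
templates = rng.standard_normal((20_000, 400))        # placeholder subset of a template library
anchors = templates[rng.choice(len(templates), 32, replace=False)]

tree = cKDTree(embed(templates, anchors))             # built once, offline

query = rng.standard_normal((1, 400))                 # one new detection to classify
dist, idx = tree.query(embed(query, anchors), k=5)    # 5 approximate nearest templates
print(idx)                                            # candidates for full cross-correlation
```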

  20. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  1. Making the most of sparse clinical data by using a predictive-model-based analysis, illustrated with a stavudine pharmacokinetic study.

    Science.gov (United States)

    Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B

    2001-02-01

    A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)--(v), all of which prevent 'averaging' individual data points directly, first, by adjusting in the model for interindividual data differences due to design differences, secondly, by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly, by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.

  2. Visual recognition and inference using dynamic overcomplete sparse learning.

    Science.gov (United States)

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.

  3. Evaluating the Predictive Power of Multivariate Tensor-based Morphometry in Alzheimers Disease Progression via Convex Fused Sparse Group Lasso.

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-21

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  4. Evaluating the predictive power of multivariate tensor-based morphometry in Alzheimer's disease progression via convex fused sparse group Lasso

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-01

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  5. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  6. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.

  7. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    Science.gov (United States)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of the method.

  8. Neutrino-nucleus reaction rates based on the relativistic quasiparticle random phase approximation

    International Nuclear Information System (INIS)

    Paar, N.; Vretenar, D.; Marketin, T.; Ring, P.

    2008-01-01

    Neutrino-nucleus cross sections are described in a novel theoretical framework where the weak interaction of leptons with hadrons is expressed in the standard current-current form, the nuclear ground state is described in the relativistic Hartree-Bogoliubov model, and the relevant transitions to excited states are calculated in the relativistic quasiparticle random phase approximation. The model is employed in studies of neutrino-nucleus reactions in several test cases

  9. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    Science.gov (United States)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    A hierarchical and successive approximate registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used: the approximation image from the wavelet transform is selected as the object to be registered. Second, a successive approximation registration method is used to accomplish the non-rigid medical image registration, i.e. the local regions of the two images are first registered roughly based on thin-plate splines, and then the current rough registration result is taken as the object to be registered in the following registration step. Experiments show that the proposed method is effective in the registration of non-rigid medical images.
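
    A single thin-plate-spline warp of the kind used in each local registration stage can be sketched as follows. In this hedged example, SciPy's RBFInterpolator with the thin_plate_spline kernel stands in for the paper's TPS formulation, and the corresponding landmark coordinates are made up for illustration.

```python
# Hedged sketch of a thin-plate-spline warp between two sets of corresponding landmarks.
import numpy as np
from scipy.interpolate import RBFInterpolator

# corresponding control landmarks in the moving and fixed images (row, col)
moving_pts = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 60.0], [55.0, 52.0], [70.0, 30.0]])
fixed_pts  = np.array([[12.0, 10.0], [41.0, 18.0], [27.0, 58.0], [57.0, 55.0], [69.0, 33.0]])

# one TPS interpolant mapping moving-image coordinates to fixed-image coordinates
tps = RBFInterpolator(moving_pts, fixed_pts, kernel='thin_plate_spline')

# warp an arbitrary grid of points from the moving image into the fixed image frame
grid = np.stack(np.meshgrid(np.arange(0, 80, 20), np.arange(0, 80, 20)), axis=-1).reshape(-1, 2)
print(tps(grid.astype(float)))
```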

  10. Incremental and Enhanced Scanline-Based Segmentation Method for Surface Reconstruction of Sparse LiDAR Data

    Directory of Open Access Journals (Sweden)

    Weimin Wang

    2016-11-01

    Full Text Available The segmentation of point clouds is an important aspect of automated processing tasks such as semantic extraction. However, the sparsity and non-uniformity of the point clouds gathered by the popular 3D mobile LiDAR devices pose many challenges for existing segmentation methods. To improve the segmentation results of point clouds from mobile LiDAR devices, we propose an optimized segmentation method based on a Scanline Continuity Constraint (SLCC) in this work. Unlike conventional scanline-based segmentation methods, SLCC clusters scanlines using continuity constraints in terms of the distance as well as the direction of two consecutive points. In addition, scanline clusters are agglomerated not only into primitive geometrical shapes but also into irregular shapes. Another downside to existing segmentation methods is that they are not capable of incremental processing. This causes unnecessary memory and time consumption for applications that require frame-wise segmentation or when new point clouds are added. In order to address this, we propose an incremental scheme, Incremental Recursive Segmentation (IRIS), that can be easily applied to any segmentation method. IRIS is achieved by combining the segments of newly added point clouds with the previously segmented results. Furthermore, as an example application, we construct a processing pipeline consisting of plane fitting and surface reconstruction using the segmentation results. Finally, we evaluate the proposed methods on three datasets acquired from a handheld Velodyne HDL-32E LiDAR device. The experimental results verify the efficiency of IRIS for any segmentation method and the advantages of SLCC for processing mobile LiDAR data.
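
    The continuity test at the heart of SLCC can be sketched as follows: consecutive points on one scanline stay in the same cluster as long as both the point-to-point distance and the change of direction remain below thresholds. This is a hedged, simplified version; the threshold values and the synthetic scanline are illustrative, not the paper's tuned parameters or data.

```python
# Hedged sketch of scanline clustering under distance and direction continuity constraints.
import numpy as np

def segment_scanline(points, max_dist=0.5, max_angle_deg=20.0):
    """points: (N, 3) array of consecutive points along one scanline."""
    clusters, current = [], [0]
    prev_dir = None
    for i in range(1, len(points)):
        step = points[i] - points[i - 1]
        dist = np.linalg.norm(step)
        direction = step / (dist + 1e-12)
        turn_ok = True
        if prev_dir is not None:
            cos_turn = np.clip(np.dot(direction, prev_dir), -1.0, 1.0)
            turn_ok = np.degrees(np.arccos(cos_turn)) <= max_angle_deg
        if dist <= max_dist and turn_ok:
            current.append(i)              # continuity holds: extend the current cluster
            prev_dir = direction
        else:
            clusters.append(current)       # continuity broken: start a new cluster
            current = [i]
            prev_dir = None                # no reference direction yet in the new cluster
    clusters.append(current)
    return clusters

if __name__ == "__main__":
    line = np.array([[x, 0.0, 0.0] for x in np.arange(0, 2, 0.1)]
                    + [[x, 3.0, 0.0] for x in np.arange(2, 4, 0.1)])  # jump between two segments
    print([len(c) for c in segment_scanline(line)])
```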

  11. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    Science.gov (United States)

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.

  12. A New Method to Estimate Changes in Glacier Surface Elevation Based on Polynomial Fitting of Sparse ICESat—GLAS Footprints

    Directory of Open Access Journals (Sweden)

    Tianjin Huang

    2017-08-01

    Full Text Available We present in this paper a polynomial fitting method applicable to segments of footprints measured by the Geoscience Laser Altimeter System (GLAS) to estimate glacier thickness change. Our modification makes the method applicable to complex topography, such as a large mountain glacier. After a full analysis of the planar fitting method to characterize errors of estimates due to complex topography, we developed an improved fitting method that adjusts a bivariate polynomial surface to the local topography. The improved method and the planar fitting method were tested on the accumulation areas of the Naimona’nyi glacier and Yanong glacier on along-track facets with lengths of 1000 m, 1500 m, 2000 m, and 2500 m, respectively. The results show that the improved method gives more reliable estimates of changes in elevation than planar fitting. The improved method was also tested on the Guliya glacier, with a large and relatively flat area, and the Chasku Muba glacier, with very complex topography. The results at these test sites demonstrate that the improved method can give estimates of glacier thickness change on glaciers with a large area and complex topography. Additionally, the improved method based on GLAS data and the Shuttle Radar Topography Mission Digital Elevation Model (SRTM-DEM) can give estimates of glacier thickness change from 2000 to 2008/2009, since it takes the 2000 SRTM-DEM as a reference, which is a longer period than the 2004 to 2008/2009 obtained when using the GLAS data only and the planar fitting method.
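
    The surface-fitting step can be sketched as an ordinary least-squares fit of a bivariate quadratic to the footprint elevations of one along-track segment, followed by a comparison against a reference DEM. The sketch below is a hedged illustration: the footprint coordinates, elevations and DEM values are synthetic placeholders, not GLAS or SRTM data.

```python
# Hedged sketch of fitting a bivariate quadratic surface to footprint elevations.
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_surface(coeffs, x, y):
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    return A @ coeffs

rng = np.random.default_rng(0)
x = rng.uniform(0, 1000, 60)                               # along-track coordinates (m)
y = rng.uniform(0, 200, 60)                                # across-track coordinates (m)
z_glas = 5000 + 0.05*x - 0.02*y + rng.normal(0, 0.3, 60)   # footprint elevations (m)

coeffs = fit_quadratic_surface(x, y, z_glas)
z_dem = 5000 + 0.05*x - 0.02*y                             # stand-in for reference DEM elevations
elevation_change = np.mean(eval_surface(coeffs, x, y) - z_dem)
print(f"mean surface elevation change: {elevation_change:.2f} m")
```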

  13. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results, but face feature extraction based on sparse representation is too simple and the resulting coefficients are not sufficiently sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved treatment of the Gabor features overcomes the problem of their large dimensionality, reduces the computation and storage cost, and enhances the robustness of the algorithm to changes in the environment. Since the classification efficiency of sparse representation is largely determined by the collaborative representation, we simplify the sparse constraint based on the L1 norm to a least-squares constraint, which makes the coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and that the recognition rate of the algorithm is improved.

  14. A GALEX-BASED SEARCH FOR THE SPARSE YOUNG STELLAR POPULATION IN THE TAURUS-AURIGAE STAR FORMING REGION

    International Nuclear Information System (INIS)

    Gómez de Castro, Ana I.; Lopez-Santiago, Javier; López-Martínez, Fatima; Sánchez, Néstor; Sestito, Paola; Gestoso, Javier Yañez; De Castro, Elisa; Cornide, Manuel

    2015-01-01

    In this work, we identify 63 bona fide new candidate T Tauri stars (TTSs) in the Taurus-Auriga region, using their ultraviolet excess as our baseline. The initial data set was defined from the GALEX all sky survey (AIS). The GALEX satellite obtained images in the near-ultraviolet (NUV) and far-ultraviolet (FUV) bands where TTSs show a prominent excess compared with main-sequence or giant stars. GALEX AIS surveyed the Taurus-Auriga molecular complex, as well as a fraction of the California Nebula and the Perseus complex; bright sources and dark clouds were avoided. The properties of TTSs in the ultraviolet (GALEX), optical (UCAC4), and infrared (2MASS) have been defined using the TTSs observed with the International Ultraviolet Explorer reference sample. The candidates were identified by means of a mixed ultraviolet-optical-infrared excess set of colors; we found that the FUV-NUV versus J–K color-color diagram is ideally suited for this purpose. From an initial sample of 163,313 bona fide NUV sources, a final list of 63 new TTS candidates in the region was produced. The search procedure has been validated by its ability to detect all known TTSs in the area surveyed: 31 TTSs. Also, we show that the weak-lined TTSs are located in a well-defined stripe in the FUV-NUV versus J–K diagram. Moreover, in this work, we provide a list of TTS photometric standards for future GALEX-based studies of the young stellar population in star forming regions.

  15. Generation of real-time global ionospheric map based on the global GNSS stations with only a sparse distribution

    Science.gov (United States)

    Li, Zishen; Wang, Ningbo; Li, Min; Zhou, Kai; Yuan, Yunbin; Yuan, Hong

    2017-04-01

    The Earth's ionosphere is the part of the atmosphere stretching from an altitude of about 50 km to more than 1000 km. When the Global Navigation Satellite System (GNSS) signal emitted from a satellite travels through the ionosphere before reaching a receiver on or near the Earth's surface, the GNSS signal is significantly delayed by the ionosphere, and this delay has been considered one of the major errors in GNSS measurement. The real-time global ionospheric map calculated from real-time data obtained by global stations is an essential means of mitigating the ionospheric delay for real-time positioning. The generation of an accurate global ionospheric map generally depends on a dense distribution of global stations; however, the number of global stations that can produce real-time data is very limited at present, which makes the generation of a highly accurate global ionospheric map very difficult when using only the current stations with real-time data. In view of this, a new approach is proposed for calculating the real-time global ionospheric map based only on the current stations with real-time data. This new approach is developed on the basis of the post-processed and the one-day predicted global ionospheric maps from our research group. The performance of the proposed approach is tested with the current global stations providing real-time data, and the test results are also compared with the IGS-released final global ionospheric map products.

  16. A GALEX-BASED SEARCH FOR THE SPARSE YOUNG STELLAR POPULATION IN THE TAURUS-AURIGAE STAR FORMING REGION

    Energy Technology Data Exchange (ETDEWEB)

    Gómez de Castro, Ana I.; Lopez-Santiago, Javier; López-Martínez, Fatima; Sánchez, Néstor; Sestito, Paola; Gestoso, Javier Yañez [AEGORA Research Group, Universidad Complutense de Madrid, Plaza de Ciencias 3, E-28040 Madrid (Spain); De Castro, Elisa; Cornide, Manuel [Fac. de CC. Físicas, Universidad Complutense de Madrid, Plaza de Ciencias 1, E-28040 Madrid (Spain)

    2015-02-01

    In this work, we identify 63 bona fide new candidate T Tauri stars (TTSs) in the Taurus-Auriga region, using their ultraviolet excess as our baseline. The initial data set was defined from the GALEX all sky survey (AIS). The GALEX satellite obtained images in the near-ultraviolet (NUV) and far-ultraviolet (FUV) bands, where TTSs show a prominent excess compared with main-sequence or giant stars. GALEX AIS surveyed the Taurus-Auriga molecular complex, as well as a fraction of the California Nebula and the Perseus complex; bright sources and dark clouds were avoided. The properties of TTSs in the ultraviolet (GALEX), optical (UCAC4), and infrared (2MASS) have been defined using the TTSs observed with the International Ultraviolet Explorer as a reference sample. The candidates were identified by means of a mixed ultraviolet-optical-infrared excess set of colors; we found that the FUV-NUV versus J–K color-color diagram is ideally suited for this purpose. From an initial sample of 163,313 bona fide NUV sources, a final list of 63 new candidate TTSs in the region was produced. The search procedure has been validated by its ability to detect all known TTSs in the area surveyed: 31 TTSs. Also, we show that the weak-lined TTSs are located in a well-defined stripe in the FUV-NUV versus J–K diagram. Moreover, in this work, we provide a list of TTS photometric standards for future GALEX-based studies of the young stellar population in star forming regions.

  17. A Diffusion Approximation Based on Renewal Processes with Applications to Strongly Biased Run-Tumble Motion.

    Science.gov (United States)

    Thygesen, Uffe Høgsbro

    2016-03-01

    We consider organisms which use a renewal strategy such as run-tumble when moving in space, for example to perform chemotaxis in chemical gradients. We derive a diffusion approximation for the motion, applying a central limit theorem due to Anscombe for renewal-reward processes; this theorem has not previously been applied in this context. Our results extend previous work, which has established the mean drift but not the diffusivity. For a classical model of tumble rates applied to chemotaxis, we find that the resulting chemotactic drift saturates to the swimming velocity of the organism as the chemical gradients grow increasingly steep. The dispersal becomes anisotropic in steep gradients, with larger dispersal across the gradient than along it. In contrast to one-dimensional settings, strong bias increases dispersal. We next include Brownian rotation in the model and find that, in the limit of high chemotactic sensitivity, the chemotactic drift is 64% of the swimming velocity, independent of the magnitude of the Brownian rotation. We finally derive characteristic timescales of the motion that can be used to assess whether the diffusion limit is justified in a given situation. The proposed technique for obtaining diffusion approximations is conceptually and computationally simple, and applicable also when the statistics of the motion are obtained empirically or through Monte Carlo simulation of the motion.
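
    A minimal Monte Carlo sketch of the run-tumble motion discussed above is given below, assuming a simple tumble-rate modulation along a hypothetical gradient in the +x direction; the drift and diffusivity estimators shown are elementary ensemble statistics, not the renewal-reward construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_tumble(n_steps=5000, v=1.0, dt=0.01, base_rate=1.0, bias=0.5):
    """2D run-tumble walker; the tumble rate drops when the swimming
    direction points up the (hypothetical) gradient along +x, which
    produces a chemotactic drift."""
    pos = np.zeros(2)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    for _ in range(n_steps):
        direction = np.array([np.cos(theta), np.sin(theta)])
        rate = base_rate * max(0.1, 1.0 - bias * direction[0])
        if rng.random() < rate * dt:          # tumble event
            theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = pos + v * direction * dt        # run segment
    return pos

# Elementary ensemble estimates of drift and dispersal from final positions.
n_steps, dt, n_walkers = 5000, 0.01, 100
finals = np.array([run_tumble(n_steps=n_steps, dt=dt) for _ in range(n_walkers)])
T = n_steps * dt
drift = finals.mean(axis=0) / T
diffusivity = finals.var(axis=0) / (2.0 * T)
print("drift:", drift, "diffusivity (along, across gradient):", diffusivity)
```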

  18. An approximate reasoning-based method for screening high-level-waste tanks for flammable gas

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    2000-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
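
    The fragment below sketches, under broad assumptions, how a linguistic variable might be represented with fuzzy membership functions and combined in a forward-chaining rule; the variable name, membership boundaries, and rule are hypothetical and are not taken from the screening methodology itself.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function of a fuzzy linguistic value."""
    rising = (x - a) / (b - a + 1e-12)
    falling = (d - x) / (d - c + 1e-12)
    return float(np.clip(min(rising, falling), 0.0, 1.0))

# Hypothetical linguistic variable "retained gas volume" (arbitrary units).
x = 120.0
memberships = {
    "low":    trapezoid(x, 0, 0, 50, 100),
    "medium": trapezoid(x, 50, 100, 150, 200),
    "high":   trapezoid(x, 150, 200, 400, 400),
}

# A forward-chaining rule such as "IF retained gas is high AND release is
# episodic THEN concern is elevated", with min() as the fuzzy AND operator.
release_episodic = 0.7           # membership of the evidence in "episodic"
concern = min(memberships["high"], release_episodic)
print(memberships, "-> concern:", concern)
```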

  19. An Approximate Reasoning-Based Method for Screening High-Level-Waste Tanks for Flammable Gas

    International Nuclear Information System (INIS)

    Eisenhawer, Stephen W.; Bott, Terry F.; Smith, Ronald E.

    2000-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  20. An approximate reasoning-based method for screening high-level-waste tanks for flammable gas

    Energy Technology Data Exchange (ETDEWEB)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    2000-06-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  1. An IR Sensor Based Smart System to Approximate Core Body Temperature.

    Science.gov (United States)

    Ray, Partha Pratim

    2017-08-01

    The experiment demonstrated herein studies two methods, namely convection and body resistance, to approximate human core body temperature. The proposed system is highly energy efficient, consuming only 165 mW and running on a 5 VDC source. The implemented solution employs an industry-grade IR thermographic sensor along with an ATmega328 breakout board. Ordinarily, the IR sensor is placed 1.5-30 cm away from the human forehead (i.e., non-invasively) and measures raw skin and ambient temperatures, which are then converted using an appropriate approximation formula to estimate the core body temperature. The raw data are plotted, visualized, and stored instantaneously on a local machine by means of two tools, MakerPlot and JAVA-JAR. The test is performed when the human subject is at complete rest and after a 10 min walk. The results are compared with the CoreTemp CM-210 sensor (by Terumo, Japan), whose reading differs by 0.7 °F from the average core body temperature obtained by the proposed IR sensor system. With a slight modification, the presented model can be connected to a remotely placed Internet of Things cloud service, which may be useful to report and predict the user's core body temperature from a probabilistic view. Such a system could also be useful as a wearable device, for example attached to a hat.
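
    A minimal sketch of the conversion step is shown below; the linear heat-balance correction and its coefficient are hypothetical placeholders for the approximation formula mentioned in the abstract and would have to be calibrated against a clinical reference thermometer.

```python
# Hypothetical linear heat-balance correction: the coefficient k would have
# to be fitted against a reference thermometer (e.g. the CoreTemp CM-210).
def core_body_temperature(skin_c, ambient_c, k=0.12):
    """Approximate core temperature (deg C) from IR skin and ambient readings."""
    return skin_c + k * (skin_c - ambient_c)

skin, ambient = 34.8, 24.0          # example raw readings from the IR sensor
print(f"approximate core temperature: {core_body_temperature(skin, ambient):.1f} C")
```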

  2. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
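
    The report's implementation is in C++ with MPI/OpenMP; purely as a reference for the kernel itself, a serial Python sketch of a sparse matrix-vector product in CSR format is given below.

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """Serial sparse matrix-vector product y = A @ x with A stored in CSR format."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(csr_matvec(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [7. 6. 17.]
```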

  3. Sparse decompositions in 'incoherent' dictionaries

    DEFF Research Database (Denmark)

    Gribonval, R.; Nielsen, Morten

    2003-01-01

    a unique sparse representation in such a dictionary. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of a combinatorial optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may...

  4. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    Science.gov (United States)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.

  5. A Comparison of Techniques for Approximating Full Image-Based Lighting

    DEFF Research Database (Denmark)

    Madsen, Claus B.; Laursen, Rune Elmgaard

    2006-01-01

    Light probes, or environment maps, are used extensively in computer graphics for visual effects involving rendering virtual objects into real scenes (Augmented Reality). A light probe is a High Dynamic Range omni-directional image covering all directions on a sphere at some location. Each pixel...... in the light probe image measures the incident radiance at the light probe acquisition point. The figure above shows an example of a light probe image in the longitude-latitude mapping, (similar to an atlas mapping of the Earth). Using the light probe information a virtual object can be rendered with correct...... scene illumination and inserted into images of the scene with credible shading, reflections and shadows. Rendering virtual objects with light probe information is a very time consuming process. Therefore several techniques exist which attempt to approximate the light probe with a set of directional...

  6. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    Science.gov (United States)

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
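
    One common smooth approximation of the Heaviside step uses an arctangent profile; the sketch below illustrates how a sharpness parameter could distinguish "non-smooth" from "smooth" image components, though the exact family of AHFs used by the authors may differ.

```python
import numpy as np

def ahf(x, eps):
    """Smooth approximation of the Heaviside step; smaller eps gives a
    sharper transition (one common choice, not necessarily the paper's)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

x = np.linspace(-1.0, 1.0, 5)
print(ahf(x, eps=0.05))   # sharp step, suited to "non-smooth" components
print(ahf(x, eps=0.50))   # gentle step, suited to "smooth" components
```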

  7. Handling data redundancy in helical cone beam reconstruction with a cone-angle-based window function and its asymptotic approximation

    International Nuclear Information System (INIS)

    Tang Xiangyang; Hsieh Jiang

    2007-01-01

    A cone-angle-based window function is defined in this manuscript for image reconstruction using helical cone beam filtered backprojection (CB-FBP) algorithms. Rather than defining the window boundaries in a two-dimensional detector acquiring projection data for computed tomographic imaging, the cone-angle-based window function deals with data redundancy by selecting rays with the smallest cone angle relative to the reconstruction plane. To be computationally efficient, an asymptotic approximation of the cone-angle-based window function is also given and analyzed in this paper. The benefit of using such an asymptotic approximation also includes the avoidance of functional discontinuities that cause artifacts in reconstructed tomographic images. The cone-angle-based window function and its asymptotic approximation provide a way, equivalent to the Tam-Danielsson window, for helical CB-FBP reconstruction algorithms to deal with data redundancy, regardless of whether the helical pitch is constant or dynamically variable during a scan. By taking the cone-parallel geometry as an example, a computer simulation study is conducted to evaluate the proposed window function and its asymptotic approximation for helical CB-FBP reconstruction algorithms handling data redundancy. The computer simulated Forbild head and thorax phantoms are utilized in the performance evaluation, showing that the proposed cone-angle-based window function and its asymptotic approximation can deal with data redundancy very well in cone beam image reconstruction from projection data acquired along helical source trajectories. Moreover, a numerical study carried out in this paper reveals that the proposed cone-angle-based window function is actually equivalent to the Tam-Danielsson window, and rigorous mathematical proofs are being investigated

  8. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, the image reconstruction of ET is a nonlinear and ill-posed inverse problem and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides a high quality of reconstructed images. In this paper, a sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2−L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness for the image reconstruction. (paper)
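
    As a simplified relative of SpaRSA, the sketch below solves the same L2−L1 problem with plain iterative soft-thresholding (fixed step size, no adaptive Barzilai-Borwein step and no Krylov-subspace projection); it is meant only to illustrate the soft-thresholding core of such solvers, on synthetic data.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_l1(A, b, lam=0.1, n_iter=300):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L the largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print("recovered support:", np.flatnonzero(np.abs(ist_l1(A, b)) > 0.1))
```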

  9. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit; Pfluger, Dirk; Murarasu, Alin; Jacob, Riko

    2012-01-01

    performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations

  10. Sparse Frequency Waveform Design for Radar-Embedded Communication

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2016-01-01

    Full Text Available For Tag applications requiring covert communication, a method for sparse frequency waveform design based on radar-embedded communication is proposed. Firstly, sparse frequency waveforms are designed based on power spectral density fitting and a quasi-Newton method. Secondly, the eigenvalue decomposition of the sparse frequency waveform sequence is used to obtain the dominant space. Finally, the communication waveforms are designed through the projection of orthogonal pseudorandom vectors onto the vertical subspace. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of communication signals, thus achieving a higher communication rate. A certain correlation exists between the reciprocally orthogonal communication signal samples and the sparse frequency waveform, which guarantees a low SER (signal error rate) and LPI (low probability of intercept). The simulation results verify the effectiveness of this method.

  11. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated basis function with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions are selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
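
    The linear building block of such greedy methods is orthogonal matching pursuit; a compact sketch is shown below on synthetic data (the nonlinear, ensemble-gradient and Tikhonov-regularized machinery of NOMP is not reproduced here).

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then re-fit the coefficients by least squares."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        corr = D.T @ residual
        support.append(int(np.argmax(np.abs(corr))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 80))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
y = 2.0 * D[:, 5] - 1.0 * D[:, 60]          # 2-sparse synthetic signal
print(omp(D, y, k=2))
```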

  12. Homogeneous hierarchies: A discrete analogue to the wavelet-based multiresolution approximation

    Energy Technology Data Exchange (ETDEWEB)

    Mirkin, B. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    A correspondence between discrete binary hierarchies and some orthonormal bases of the n-dimensional Euclidean space can be applied to such problems as clustering, ordering, identifying/testing in very large data bases, or multiresolution image/signal processing. The latter issue is considered in the paper. The binary hierarchy based multiresolution theory is expected to lead to effective methods for data processing because of relaxing the regularity restrictions of the classical theory.

  13. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
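
    A direct, brute-force sketch of the ES-K idea is given below: fit every K-subset of columns by least squares and keep the best one. The replica-exchange Monte Carlo and multiple-histogram machinery of AES-K is not shown, and the data here are synthetic.

```python
import numpy as np
from itertools import combinations

def exhaustive_k_sparse(X, y, k):
    """Fit least squares on every K-subset of columns and keep the subset
    with the smallest residual sum of squares (the ES-K idea)."""
    best_rss, best_subset = np.inf, None
    for subset in combinations(range(X.shape[1]), k):
        cols = list(subset)
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
        if rss < best_rss:
            best_rss, best_subset = rss, subset
    return best_subset, best_rss

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
y = 1.2 * X[:, 2] - 0.7 * X[:, 8] + 0.05 * rng.standard_normal(50)
print(exhaustive_k_sparse(X, y, k=2))       # expect columns (2, 8)
```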

  14. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  15. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  16. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  17. An approximate-reasoning-based method for screening high-level waste tanks for flammable gas

    International Nuclear Information System (INIS)

    Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.

    1998-01-01

    The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.

  18. Spline Approximation-Based Optimization of Multi-component Disperse Reinforced Composites

    Directory of Open Access Journals (Sweden)

    Yu. I. Dimitrienko

    2015-01-01

    Full Text Available The paper suggests an algorithm for solving problems of optimal design of multicomponent disperse-reinforced composite materials, whose properties are defined by the filler concentrations and are independent of filler shape. It formulates the problem of conditional optimization of a composite with restrictions on its effective parameters (the elasticity modulus, tension and compression strengths, and heat-conductivity coefficient) with minimized composite density. The effective characteristics of a composite were computed by finite-element solution of the auxiliary local problems of elasticity and heat-conductivity theory that appear when the asymptotic averaging method is applied. The suggested optimization algorithm includes the following main stages: (1) finding a set of solutions of the direct problem to calculate the effective characteristics; (2) constructing curves of effective characteristics versus filler concentrations by means of approximating functions, for which a thin-plate spline with smoothing is proposed; (3) constructing the set of points that satisfy the restrictions and the boundary of this set, obtaining, as a result, a contour which can be parameterized; (4) finding the global density minimum over the contour through psi-transformation. A numerical example of solving the optimization problem is given for a disperse-reinforced composite with two types of fillers, both hollow microspheres: glass and phenolic. It was shown that the suggested algorithm finds optimal filler concentrations efficiently enough.

  19. Radioactive waste management integrated data base: a bibliography. [Approximately 1100 references

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, C.A.; Garland, P.A.

    1980-09-01

    The purpose of this indexed bibliography is to organize and collect the literature references on waste generation and treatment, characteristics, inventories, and costs. The references were captured into a searchable information file, and the information file was sorted, indexed, and printed for this bibliography. A compilation of approximately 1100 references on nuclear waste management, the first of a series, has been completed. Each reference is categorized by waste origin (commercial, defense, institutional, and foreign) and by subject area: (1) high-level wastes, (2) low-level wastes, (3) TRU wastes, (4) airborne wastes, (5) remedial action (formerly utilized sites, surplus facilities, and mill tailings), (6) isolation, (7) transportation, (8) spent fuel, (9) fuel cycle centers, and (10) a general category that covers nonspecific wastes. Five indexes are provided to assist the user in locating documents of interest: author, author affiliation (corporate authority), subject category, keyword, and permuted title. Machine (computer) searches of these indexes can be made specifying multiple constraints if so desired. This bibliography will be periodically updated as new information becomes available. In addition to being used in searches for specific data, the information file can also be used for resource document collection, names and addresses of contacts, and identification of potential sources of data.

  20. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    Science.gov (United States)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.

  1. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimates of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data and no user intervention is required. The method requires a priori knowledge of block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.

  2. Sparse reconstruction using distribution agnostic bayesian matching pursuit

    KAUST Repository

    Masood, Mudassir

    2013-11-01

    A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic on signal statistics and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. The method utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean-square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.

  3. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sajjad

    2014-02-01

    Full Text Available Visual sensor networks (VSNs usually generate a low-resolution (LR frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP. This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  4. Simulation-based evaluation of the performance of the F test in a linear multilevel model setting with sparseness at the level of the primary unit.

    Science.gov (United States)

    Bruyndonckx, Robin; Aerts, Marc; Hens, Niel

    2016-09-01

    In a linear multilevel model, significance of all fixed effects can be determined using F tests under maximum likelihood (ML) or restricted maximum likelihood (REML). In this paper, we demonstrate that in the presence of primary unit sparseness, the performance of the F test under both REML and ML is rather poor. Using simulations based on the structure of a data example on ceftriaxone consumption in hospitalized children, we studied variability, type I error rate and power in scenarios with a varying number of secondary units within the primary units. In general, the variability in the estimates for the effect of the primary unit decreased as the number of secondary units increased. In the presence of singletons (i.e., only one secondary unit within a primary unit), REML consistently outperformed ML, although even under REML the performance of the F test was found inadequate. When modeling the primary unit as a random effect, the power was lower while the type I error rate was unstable. The options of dropping, regrouping, or splitting the singletons could solve either the problem of a high type I error rate or a low power, while worsening the other. The permutation test appeared to be a valid alternative as it outperformed the F test, especially under REML. We conclude that in the presence of singletons, one should be careful in using the F test to determine the significance of the fixed effects, and propose the permutation test (under REML) as an alternative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
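
    A bare-bones sketch of a permutation test for a group effect is given below; it permutes labels in a simple two-group setting and is only a conceptual stand-in for the REML-based multilevel permutation test evaluated in the paper, on invented data.

```python
import numpy as np

rng = np.random.default_rng(6)

def permutation_test(y, groups, n_perm=2000):
    """Two-sided permutation test of a binary group effect: compare the
    observed mean difference with differences under random relabelling."""
    observed = y[groups == 1].mean() - y[groups == 0].mean()
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(groups)
        diff = y[perm == 1].mean() - y[perm == 0].mean()
        hits += abs(diff) >= abs(observed)
    return observed, hits / n_perm           # effect estimate and p-value

# Small, sparse design: few observations per group.
y = np.concatenate([rng.normal(0.0, 1.0, 12), rng.normal(0.8, 1.0, 12)])
groups = np.concatenate([np.zeros(12, int), np.ones(12, int)])
print(permutation_test(y, groups))
```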

  5. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  6. Deep feature representation with stacked sparse auto-encoder and convolutional neural network for hyperspectral imaging-based detection of cucumber defects

    Science.gov (United States)

    It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...

  7. Decoherence and Noise in Spin-based Solid State Quantum Computers. Approximation-Free Numerical Simulations

    National Research Council Canada - National Science Library

    Harmon, Bruce N; Dobrovitski, Viatcheslav V

    2007-01-01

    ...) have also been developed and applied. Most recently, specific strategies for quantum control have been investigated for realistic systems in order to extend the coherence times for spin-based quantum computing implementations...

  8. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J.; Zhao, Hongyu

    2015-01-01

    predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths

  9. MIMO feed-forward design in wafer scanners using a gradient approximation-based algorithm

    NARCIS (Netherlands)

    Heertjes, M.F.; Hennekens, D.W.T.; Steinbuch, M.

    2010-01-01

    An experimental demonstration is given of a data-based multi-input multi-output (MIMO) feed-forward control design applied to the motion systems of a wafer scanner. Atop a nominal single-input single-output (SISO) feed-forward controller, a MIMO controller is designed having a finite impulse

  10. Efficient model checking for duration calculus based on branching-time approximations

    DEFF Research Database (Denmark)

    Fränzle, Martin; Hansen, Michael Reichhardt

    2008-01-01

    Duration Calculus (abbreviated to DC) is an interval-based, metric-time temporal logic designed for reasoning about embedded real-time systems at a high level of abstraction. But the complexity of model checking any decidable fragment featuring both negation and chop, DC's only modality, is non...

  11. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an

  12. Scale and the Evolutionarily Based Approximate Number System: An Exploratory Study

    Science.gov (United States)

    Delgado, Cesar; Jones, M. Gail; You, Hye Sun; Robertson, Laura; Chesnutt, Katherine; Halberda, Justin

    2017-01-01

    Crosscutting concepts such as "scale, proportion, and quantity" are recognised by U.S. science standards as a potential vehicle for students to integrate their scientific and mathematical knowledge; yet, U.S. students and adults trail their international peers in scale and measurement estimation. Culturally based knowledge of scale such…

  13. Neural network approximation of nonlinearity in laser nano-metrology system based on TLMI

    Energy Technology Data Exchange (ETDEWEB)

    Olyaee, Saeed; Hamedi, Samaneh, E-mail: s_olyaee@srttu.edu [Nano-photonics and Optoelectronics Research Laboratory (NORLab), Faculty of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University (SRTTU), Lavizan, 16788, Tehran (Iran, Islamic Republic of)

    2011-02-01

    In this paper, an approach based on neural network (NN) for nonlinearity modeling in a nano-metrology system using three-longitudinal-mode laser heterodyne interferometer (TLMI) for length and displacement measurements is presented. We model nonlinearity errors that arise from elliptically and non-orthogonally polarized laser beams, rotational error in the alignment of laser head with respect to the polarizing beam splitter, rotational error in the alignment of the mixing polarizer, and unequal transmission coefficients in the polarizing beam splitter. Here we use a neural network algorithm based on the multi-layer perceptron (MLP) network. The simulation results show that multi-layer feed forward perceptron network is successfully applicable to real noisy interferometer signals.

  14. Neural network approximation of nonlinearity in laser nano-metrology system based on TLMI

    International Nuclear Information System (INIS)

    Olyaee, Saeed; Hamedi, Samaneh

    2011-01-01

    In this paper, an approach based on neural network (NN) for nonlinearity modeling in a nano-metrology system using three-longitudinal-mode laser heterodyne interferometer (TLMI) for length and displacement measurements is presented. We model nonlinearity errors that arise from elliptically and non-orthogonally polarized laser beams, rotational error in the alignment of laser head with respect to the polarizing beam splitter, rotational error in the alignment of the mixing polarizer, and unequal transmission coefficients in the polarizing beam splitter. Here we use a neural network algorithm based on the multi-layer perceptron (MLP) network. The simulation results show that multi-layer feed forward perceptron network is successfully applicable to real noisy interferometer signals.

  15. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
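
    The sketch below illustrates the projection-then-approximate idea under simplifying assumptions: a linear (PCA) projection fitted on a sparse subsample of the data, followed by a small neural network trained in the projection space. The synthetic data, dimensions, and network size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# High-dimensional inputs that actually live on a 2-D manifold (here a
# random linear embedding of 2 latent coordinates into 50 dimensions).
latent = rng.uniform(-1, 1, size=(2000, 2))
embed = rng.standard_normal((2, 50))
X = latent @ embed
y = np.sin(3 * latent[:, 0]) + latent[:, 1] ** 2    # target defined on the manifold

# Fit the projection on a sparse subsample, then train the network
# on the low-dimensional projection of the full data set.
sample = rng.choice(len(X), size=200, replace=False)
pca = PCA(n_components=2).fit(X[sample])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(pca.transform(X), y)
print("training R^2 in projection space:", round(net.score(pca.transform(X), y), 3))
```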

  16. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

    Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.

  17. Finite element approximations of the stokes flow problem based upon various variational principles

    International Nuclear Information System (INIS)

    Franca, L.P.; Hughes, T.J.R.; Stenberg, R.

    1989-05-01

    Finite element methods are constructed by adding to the usual Galerkin method terms that are mesh-dependent least-squares forms of the Euler-Lagrange equations. The methods are consistent and possess additional stability compared to the Galerkin method. Finite element interpolations, which are unstable in the Galerkin approach, are now convergent. The methodology is applied to the velocity-pressure formulation, a.k.a. Herrmann's formulation, to the stress-velocity formulation, a.k.a. Hellinger-Reissner's formulation, and to a new formulation based on augmented stress, pressure and velocity.

  18. Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation

    Directory of Open Access Journals (Sweden)

    Carlos Fernández-Llatas

    2013-10-01

    Full Text Available Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires demanding iterative processes in which each iteration represents scientific progress in the knowledge of the disease. To deliver this guidance through telehealth, the use of formal clinical guidelines allows the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized.

  19. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
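
    A row-wise sketch of sparse matrix-matrix multiplication with a hash-map accumulator, one of the accumulator types compared in such kernels, is shown below; it is a serial Python illustration, not the portable kkSpGEMM implementation.

```python
from collections import defaultdict

def spgemm(A, B):
    """Row-wise sparse product C = A @ B with a hash-map accumulator per row;
    A, B, C are dicts mapping row index -> {column index: value}."""
    C = {}
    for i, a_row in A.items():
        acc = defaultdict(float)
        for k, a_ik in a_row.items():           # nonzeros of row i of A
            for j, b_kj in B.get(k, {}).items():  # nonzeros of row k of B
                acc[j] += a_ik * b_kj
        C[i] = dict(acc)
    return C

A = {0: {0: 4.0, 2: 1.0}, 1: {1: 3.0}, 2: {0: 2.0, 2: 5.0}}
B = {0: {1: 1.0}, 1: {0: 2.0}, 2: {1: 6.0}}
print(spgemm(A, B))   # {0: {1: 10.0}, 1: {0: 6.0}, 2: {1: 32.0}}
```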

  20. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
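
    The classifier-based approximation of the likelihood ratio can be illustrated with the standard "likelihood-ratio trick", sketched below under toy Gaussian assumptions: a classifier score s(x) trained signal-vs-background yields r(x) ≈ s(x) / (1 − s(x)); the datasets and classifier here are arbitrary stand-ins for a real simulator and model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Two toy "simulators": signal and background samples in a 3-D feature space.
signal = rng.normal(loc=0.8, scale=1.0, size=(5000, 3))
background = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# A classifier trained signal-vs-background approximates the likelihood
# ratio via r(x) = s(x) / (1 - s(x)), where s(x) is the classifier score.
clf = LogisticRegression(max_iter=1000).fit(X, y)
s = clf.predict_proba(signal[:5])[:, 1]
print("approximate likelihood ratios:", s / (1 - s))
```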

  1. Diophantine approximation

    CERN Document Server

    Schmidt, Wolfgang M

    1980-01-01

    "In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approxi- mations to algebraic numbers. The present volume is an ex- panded and up-dated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)

  2. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaofeng, E-mail: xfyang@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Zhao, Jia, E-mail: zhao62@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 (United States); Wang, Qi, E-mail: qwang@math.sc.edu [Department of Mathematics, University of South Carolina, Columbia, SC 29208 (United States); Beijing Computational Science Research Center, Beijing (China); School of Materials Science and Engineering, Nankai University, Tianjin (China)

    2017-03-15

    The Molecular Beam Epitaxial model is derived from the variation of a free energy, that consists of either a fourth order Ginzburg–Landau double well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop proper temporal discretization for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing a first and second order time-stepping scheme based on the “Invariant Energy Quadratization” (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulted semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and thus can be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate stability and accuracy of the proposed schemes.

  3. Is multidetector CT-based bone mineral density and quantitative bone microstructure assessment at the spine still feasible using ultra-low tube current and sparse sampling?

    Energy Technology Data Exchange (ETDEWEB)

    Mei, Kai; Kopp, Felix K.; Schwaiger, Benedikt J.; Gersing, Alexandra S.; Sauter, Andreas; Muenzel, Daniela; Rummeny, Ernst J. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Bippus, Rolf [Research Laboratories, Philips GmbH Innovative Technologies, Hamburg (Germany); Koehler, Thomas [Research Laboratories, Philips GmbH Innovative Technologies, Hamburg (Germany); Technische Universitaet Muenchen, TUM Institute for Advanced Studies, Garching (Germany); Fehringer, Andreas [Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Pfeiffer, Franz [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Technische Universitaet Muenchen, TUM Institute for Advanced Studies, Garching (Germany); Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Kirschke, Jan S. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Section of Diagnostic and Interventional Neuroradiology, Munich (Germany); Noel, Peter B. [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Technische Universitaet Muenchen, Lehrstuhl fuer Biomedizinische Physik, Garching (Germany); Baum, Thomas [Klinikum rechts der Isar, Technische Universitaet Muenchen, Department of Diagnostic and Interventional Radiology, Munich (Germany); Klinikum rechts der Isar, Technische Universitaet Muenchen, Section of Diagnostic and Interventional Neuroradiology, Munich (Germany)

    2017-12-15

    Osteoporosis diagnosis using multidetector CT (MDCT) is limited by the relatively high radiation exposure involved. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low-radiation examinations were achieved by simulating lower tube currents and sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original-dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with fractures than in those without (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting that osteoporosis diagnosis is feasible. However, absolute values differed from the original values. BMD from sparse sampling appeared to be more robust. This dose-dependency of the parameters should be considered for future clinical use. (orig.)

  4. Is multidetector CT-based bone mineral density and quantitative bone microstructure assessment at the spine still feasible using ultra-low tube current and sparse sampling?

    International Nuclear Information System (INIS)

    Mei, Kai; Kopp, Felix K.; Schwaiger, Benedikt J.; Gersing, Alexandra S.; Sauter, Andreas; Muenzel, Daniela; Rummeny, Ernst J.; Bippus, Rolf; Koehler, Thomas; Fehringer, Andreas; Pfeiffer, Franz; Kirschke, Jan S.; Noel, Peter B.; Baum, Thomas

    2017-01-01

    Osteoporosis diagnosis using multidetector CT (MDCT) is limited by the relatively high radiation exposure involved. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low-radiation examinations were achieved by simulating lower tube currents and sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original-dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with fractures than in those without (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting that osteoporosis diagnosis is feasible. However, absolute values differed from the original values. BMD from sparse sampling appeared to be more robust. This dose-dependency of the parameters should be considered for future clinical use. (orig.)

  5. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm

  6. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    KAUST Repository

    Sicat, Ronell Barrera; Kruger, Jens; Moller, Torsten; Hadwiger, Markus

    2014-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined

  7. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has often been used to track group targets in the presence of cluttered measurements and missing detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation (BFG) method and the strong tracking filter (STF) is proposed to address the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm, MM-GGIW-CPHD, can effectively deal with the combination/spawning of groups, and the tracking error of group targets in the maneuvering stage is decreased.

  8. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    Science.gov (United States)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and total energy from the local density approximation of density functional theory. The parameters are adjusted computationally so that the results reproduce the band structure and the total energy, and an algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals in several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.

  9. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization into the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.
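
    To indicate what the manifold (graph Laplacian) regularizer adds, the following rough sketch builds a k-nearest-neighbour graph over simulated pixel spectra and evaluates the penalty trace(X L X^T), which is small only when spectrally similar pixels receive similar abundance vectors. The data sizes, neighbourhood size and 0/1 edge weights are assumptions for illustration; the paper's ADMM solver and collaborative sparsity term are not reproduced here.

        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse import csgraph

        rng = np.random.default_rng(0)
        n_pixels, n_bands, n_endmembers = 200, 50, 5

        Y = rng.random((n_bands, n_pixels))        # simulated pixel spectra (columns)
        X = rng.random((n_endmembers, n_pixels))   # candidate abundance matrix

        # symmetrized k-NN graph over pixels in spectral space
        W = kneighbors_graph(Y.T, n_neighbors=8, mode='connectivity')
        W = 0.5 * (W + W.T)
        L = csgraph.laplacian(W).toarray()         # graph Laplacian

        # manifold regularizer: trace(X L X^T) ~ sum_ij W_ij ||x_i - x_j||^2
        reg = np.trace(X @ L @ X.T)
        print("manifold regularization term:", reg)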

  10. Electromagnetic Formation Flight (EMFF) for Sparse Aperture Arrays

    Science.gov (United States)

    Kwon, Daniel W.; Miller, David W.; Sedwick, Raymond J.

    2004-01-01

    Traditional methods of actuating spacecraft in sparse aperture arrays use propellant as a reaction mass. For formation flying systems, propellant becomes a critical consumable which can be quickly exhausted while maintaining relative orientation. Additional problems posed by propellant include optical contamination, plume impingement, thermal emission, and vibration excitation. For these missions where control of relative degrees of freedom is important, we consider using a system of electromagnets, in concert with reaction wheels, to replace the consumables. Electromagnetic Formation Flight sparse apertures, powered by solar energy, are designed differently from traditional propulsion systems, which are based on ΔV. This paper investigates the design of sparse apertures both inside and outside the Earth's gravity field.

  11. Recursive nearest neighbor search in a sparse and multiscale domain for comparing audio signals

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Daudet, Laurent

    2011-01-01

    We investigate recursive nearest neighbor search in a sparse domain at the scale of audio signals. Essentially, to approximate the cosine distance between the signals we make pairwise comparisons between the elements of localized sparse models built from large and redundant multiscale dictionaries...

  12. Mathematical analysis of the boundary-integral based electrostatics estimation approximation for molecular solvation: exact results for spherical inclusions.

    Science.gov (United States)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2011-09-28

    We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics

  13. A quadratic approximation-based algorithm for the solution of multiparametric mixed-integer nonlinear programming problems

    KAUST Repository

    Domínguez, Luis F.

    2012-06-25

    An algorithm for the solution of convex multiparametric mixed-integer nonlinear programming problems arising in process engineering problems under uncertainty is introduced. The proposed algorithm iterates between a multiparametric nonlinear programming subproblem and a mixed-integer nonlinear programming subproblem to provide a series of parametric upper and lower bounds. The primal subproblem is formulated by fixing the integer variables and solved through a series of multiparametric quadratic programming (mp-QP) problems based on quadratic approximations of the objective function, while the deterministic master subproblem is formulated so as to provide feasible integer solutions for the next primal subproblem. To reduce the computational effort when infeasibilities are encountered at the vertices of the critical regions (CRs) generated by the primal subproblem, a simplicial approximation approach is used to obtain CRs that are feasible at each of their vertices. The algorithm terminates when there does not exist an integer solution that is better than the one previously used by the primal problem. Through a series of examples, the proposed algorithm is compared with a multiparametric mixed-integer outer approximation (mp-MIOA) algorithm to demonstrate its computational advantages. © 2012 American Institute of Chemical Engineers (AIChE).

  14. Object tracking by occlusion detection via structured sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2013-06-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object's track. This is the case when significant occlusion occurs. To accommodate for non-sparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our tracker consistently outperforms the state-of-the-art. © 2013 IEEE.

  15. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-01-01

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as a sparse basis, becomes more important. Here we compare several methods for representing hyperspectral images, including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.

  16. Sparse Representations of Hyperspectral Images

    KAUST Repository

    Swanson, Robin J.

    2015-11-23

    Hyperspectral image data has long been an important tool for many areas of science. The addition of spectral data yields significant improvements in areas such as object and image classification, chemical and mineral composition detection, and astronomy. Traditional capture methods for hyperspectral data often require each wavelength to be captured individually, or by sacrificing spatial resolution. Recently there have been significant improvements in snapshot hyperspectral captures using, in particular, compressed sensing methods. As we move to a compressed sensing image formation model, the need for strong image priors to shape our reconstruction, as well as a sparse basis, becomes more important. Here we compare several methods for representing hyperspectral images, including learned three-dimensional dictionaries, sparse convolutional coding, and decomposable nonlocal tensor dictionaries. Additionally, we further explore their parameter space to identify which parameters provide the most faithful and sparse representations.

  17. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
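
    The sparse-map idea can be pictured with ordinary containers: a sparse map from one index set to another is simply a mapping from each index to the short list of indices it is connected to, and the matrix special case is compressed sparse row. The sketch below is only a conceptual illustration (the index sets and maps are invented), not the authors' code library.

        import numpy as np
        from scipy.sparse import csr_matrix

        # CSR: the two-index special case of a sparse map
        A = np.array([[0., 1., 0.],
                      [2., 0., 3.],
                      [0., 0., 4.]])
        A_csr = csr_matrix(A)
        print(A_csr.indptr, A_csr.indices, A_csr.data)

        # hypothetical sparse maps: basis function -> domains, domain -> auxiliary functions
        fn_to_dom = {0: [0], 1: [0, 1], 2: [1], 3: [1, 2]}
        dom_to_aux = {0: [10, 11], 1: [11, 12], 2: [12]}

        def chain(m1, m2):
            """Compose two sparse maps: i -> union over j in m1[i] of m2[j]."""
            return {i: sorted({k for j in js for k in m2[j]}) for i, js in m1.items()}

        def intersect(m1, m2):
            """Elementwise intersection of two sparse maps over the same source index set."""
            return {i: sorted(set(js) & set(m2.get(i, []))) for i, js in m1.items()}

        print(chain(fn_to_dom, dom_to_aux))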

  18. Detection of Pitting in Gears Using a Deep Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Yongzhi Qu

    2017-05-01

    Full Text Available In this paper, a new method for gear pitting fault detection is presented. The presented method is developed based on a deep sparse autoencoder. The method integrates dictionary learning in sparse coding into a stacked autoencoder network. Sparse coding with dictionary learning is viewed as an adaptive feature extraction method for machinery fault diagnosis. An autoencoder is an unsupervised machine learning technique. A stacked autoencoder network with multiple hidden layers is considered to be a deep learning network. The presented method uses a stacked autoencoder network to perform the dictionary learning in sparse coding and extract features from raw vibration data automatically. These features are then used to perform gear pitting fault detection. The presented method is validated with vibration data collected from gear tests with pitting faults in a gearbox test rig and compared with an existing deep learning-based approach.

  19. Image understanding using sparse representations

    CERN Document Server

    Thiagarajan, Jayaraman J; Turaga, Pavan; Spanias, Andreas

    2014-01-01

    Image understanding has been playing an increasingly crucial role in several inverse problems and computer vision. Sparse models form an important component in image understanding, since they emulate the activity of neural receptors in the primary visual cortex of the human brain. Sparse methods have been utilized in several learning problems because of their ability to provide parsimonious, interpretable, and efficient models. Exploiting the sparsity of natural signals has led to advances in several application areas including image compression, denoising, inpainting, compressed sensing, blin

  20. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applic

  1. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.

  2. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  3. Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions.

    Science.gov (United States)

    Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin

    2009-03-01

    Reinforcement learning methods can be used in robotics applications especially for specific target-oriented problems, for example the reward-based recalibration of goal directed actions. To this end, still relatively large and continuous state-action spaces need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D-space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields) and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
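
    A stripped-down sketch of the core ingredient, linear value estimation over overlapping Gaussian receptive fields with a temporal-difference update, is given below for a one-dimensional toy state space; the kernel centres, widths, learning rate and reward rule are assumptions, whereas the paper's system uses 4D kernels, a continuous action space and several reward strategies.

        import numpy as np

        rng = np.random.default_rng(1)

        # overlapping Gaussian receptive fields tiling the state interval [0, 1]
        centers = np.linspace(0.0, 1.0, 20)
        width = 0.08

        def features(s):
            phi = np.exp(-0.5 * ((s - centers) / width) ** 2)
            return phi / phi.sum()                      # normalized activations

        w = np.zeros_like(centers)                      # linear value-function weights
        alpha, gamma = 0.1, 0.95

        def reward(s):
            return 1.0 if abs(s - 0.8) < 0.05 else 0.0  # reward only on "touching" the target

        s = 0.1
        for step in range(5000):
            a = rng.uniform(-0.05, 0.05)                # random exploratory continuous action
            s_next = float(np.clip(s + a, 0.0, 1.0))
            r = reward(s_next)
            # TD(0) update of V(s) = w . phi(s)
            td_error = r + gamma * (w @ features(s_next)) - w @ features(s)
            w += alpha * td_error * features(s)
            s = 0.1 if r > 0 else s_next                # restart the episode on success

        print("learned value near the target:", w @ features(0.8))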

  4. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
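
    The flavour of such a moment-based approximation can be seen in the beta case the abstract singles out as exact: given only a mean and variance, a method-of-moments fit already pins down the density. The target moments below are illustrative.

        import numpy as np
        from scipy.stats import beta

        # assume the only available information is the mean and variance on [0, 1]
        m, v = 0.5, 0.04                     # symmetric case; illustrative variance

        # method of moments: E[X] = a/(a+b), Var[X] = ab / ((a+b)^2 (a+b+1))
        common = m * (1.0 - m) / v - 1.0
        a, b = m * common, (1.0 - m) * common

        x = np.linspace(0.0, 1.0, 5)
        print("fitted beta parameters:", a, b)
        print("approximate pdf values:", beta.pdf(x, a, b))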

  5. Aharonov-Casher effect and quantum transport in graphene based nano rings: A self-consistent Born approximation

    Science.gov (United States)

    Ghaderzadeh, A.; Rahbari, S. H. Ebrahimnazhad; Phirouznia, A.

    2018-03-01

    In this study, the Rashba-coupling-induced Aharonov-Casher effect in a graphene based nano ring is investigated theoretically. The graphene based nano ring is considered as a central device connected to semi-infinite graphene nano ribbons. In the presence of the Rashba spin-orbit interaction, two armchair-edged nano ribbons are considered as semi-infinite leads. The non-equilibrium Green's function approach is utilized to obtain the quantum transport characteristics of the system. The relaxation and dephasing mechanisms are treated within the self-consistent Born approximation. The Lopez-Sancho method is also applied to obtain the self-energy of the leads. We find that the non-equilibrium current of the system exhibits measurable Aharonov-Casher oscillations with respect to the Rashba coupling strength. In addition, we observe the same oscillations in the dilute-impurity regime, in which the amplitude of the oscillations is suppressed as a result of relaxation.

  6. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothness at the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.

  7. In-silico oncology: an approximate model of brain tumor mass effect based on directly manipulated free form deformation

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Stefan; Mang, Andreas; Toma, Alina; Buzug, Thorsten M. [University of Luebeck (Germany). Institute of Medical Engineering

    2010-12-15

    The present work introduces a novel method for approximating mass effect of primary brain tumors. The spatio-temporal dynamics of cancerous cells are modeled by means of a deterministic reaction-diffusion equation. Diffusion tensor information obtained from a probabilistic diffusion tensor imaging atlas is incorporated into the model to simulate anisotropic diffusion of cancerous cells. To account for the expansive nature of the tumor, the computed net cell density of malignant cells is linked to a parametric deformation model. This mass effect model is based on the so-called directly manipulated free form deformation. Spatial correspondence between two successive simulation steps is established by tracking landmarks, which are attached to the boundary of the gross tumor volume. The movement of these landmarks is used to compute the new configuration of the control points and, hence, determines the resulting deformation. To prevent a deformation of rigid structures (i.e. the skull), fixed shielding landmarks are introduced. In a refinement step, an adaptive landmark scheme ensures a dense sampling of the tumor isosurface, which in turn allows for an appropriate representation of the tumor shape. The influence of different parameters on the model is demonstrated by a set of simulations. Additionally, simulation results are qualitatively compared to an exemplary set of clinical magnetic resonance images of patients diagnosed with high-grade glioma. Careful visual inspection of the results demonstrates the potential of the implemented model and provides first evidence that the computed approximation of tumor mass effect is sensible. The shape of diffusive brain tumors (glioblastoma multiforme) can be recovered and approximately matches the observations in real clinical data. (orig.)

  8. Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes

    Science.gov (United States)

    Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca

    2018-01-01

    Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
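
    For orientation, the unlumped degree-based mean-field (DBMF) system for the simplest two-state (SIS) contact process is shown below: one differential equation per degree class, which is exactly the per-degree structure that the proposed clustering compresses. The degree range, rates and initial condition are assumed values, and the AME/PA systems treated in the paper are larger but analogous.

        import numpy as np
        from scipy.integrate import solve_ivp

        k = np.arange(1, 21)                      # degree classes 1..20 (one equation each)
        pk = k**-2.5 / np.sum(k**-2.5)            # assumed heavy-tailed degree distribution
        mean_k = np.sum(k * pk)
        lam, mu = 0.3, 1.0                        # infection and recovery rates

        def dbmf_rhs(t, rho):
            # theta: probability that a randomly chosen edge points to an infected node
            theta = np.sum(k * pk * rho) / mean_k
            return -mu * rho + lam * k * (1.0 - rho) * theta

        rho0 = np.full(k.size, 0.01)              # 1% initially infected in each degree class
        sol = solve_ivp(dbmf_rhs, (0.0, 50.0), rho0)

        prevalence = np.sum(pk * sol.y[:, -1])    # overall infected fraction at the final time
        print("approximate stationary prevalence:", prevalence)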

  9. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
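
    The approximately-linear-dependence (ALD) test behind the sparsification can be sketched in a few lines: a sample joins the kernel dictionary only if its projection error onto the span of the current dictionary, measured in feature space, exceeds a threshold. The kernel, threshold and data below are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def rbf(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def ald_dictionary(samples, nu=0.1):
            """Greedy ALD sparsification: keep samples whose projection error exceeds nu."""
            dictionary = [samples[0]]
            for x in samples[1:]:
                K = np.array([[rbf(a, b) for b in dictionary] for a in dictionary])
                kvec = np.array([rbf(a, x) for a in dictionary])
                coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), kvec)
                delta = rbf(x, x) - kvec @ coeffs      # ALD projection error
                if delta > nu:
                    dictionary.append(x)
            return dictionary

        rng = np.random.default_rng(0)
        states = rng.uniform(-1.0, 1.0, size=(500, 2))   # e.g. sampled (angle, velocity) pairs
        dict_states = ald_dictionary(states, nu=0.1)
        print("dictionary size:", len(dict_states), "of", len(states))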

  10. Sparse Classification - Methods & Applications

    DEFF Research Database (Denmark)

    Einarsson, Gudmundur

    for analysing such data carry the potential to revolutionize tasks such as medical diagnostics where often decisions need to be based on only a few high-dimensional observations. This explosion in data dimensionality has sparked the development of novel statistical methods. In contrast, classical statistics...

  11. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored, and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions, and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.
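
    A miniature version of the kind of experiment described, drawing the non-zero entries of a sparse vector from different distributions and testing exact recovery from compressive measurements, might look like the sketch below, which uses orthogonal matching pursuit as the recovery algorithm; the dimensions, sparsity, trial count and success tolerance are assumptions.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, s = 256, 80, 10                          # ambient dimension, measurements, sparsity

        def sparse_vector(dist):
            x = np.zeros(n)
            support = rng.choice(n, size=s, replace=False)
            if dist == "gaussian":
                x[support] = rng.standard_normal(s)
            else:                                      # "constant": +/-1 amplitudes
                x[support] = rng.choice([-1.0, 1.0], size=s)
            return x

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix

        for dist in ["gaussian", "constant"]:
            successes = 0
            for trial in range(50):
                x = sparse_vector(dist)
                y = A @ x
                omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
                omp.fit(A, y)
                successes += np.linalg.norm(omp.coef_ - x) < 1e-6 * np.linalg.norm(x)
            print(dist, "exact recoveries out of 50:", successes)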

  12. Articular surface approximation in equivalent spatial parallel mechanism models of the human knee joint: an experiment-based assessment.

    Science.gov (United States)

    Ottoboni, A; Parenti-Castelli, V; Sancisi, N; Belvedere, C; Leardini, A

    2010-01-01

    In-depth comprehension of human joint function requires complex mathematical models, which are particularly necessary in applications of prosthesis design and surgical planning. Kinematic models of the knee joint, based on one-degree-of-freedom equivalent mechanisms, have been proposed to replicate the passive relative motion between the femur and tibia, i.e., the joint motion in virtually unloaded conditions. In the mechanisms analysed in the present work, some fibres within the anterior and posterior cruciate and medial collateral ligaments were taken as isometric during passive motion, and articulating surfaces as rigid. The shapes of these surfaces were described with increasing anatomical accuracy, i.e. from planar to spherical and general geometry, which consequently led to models with increasing complexity. Quantitative comparison of the results obtained from three models, featuring an increasingly accurate approximation of the articulating surfaces, was performed by using experimental measurements of joint motion and anatomical structure geometries of four lower-limb specimens. Corresponding computer simulations of joint motion were obtained from the different models. The results revealed a good replication of the original experimental motion by all models, although the simulations also showed that a limit exists beyond which description of the knee passive motion does not benefit considerably from further approximation of the articular surfaces.

  13. Abnormal Event Detection Using Local Sparse Representation

    DEFF Research Database (Denmark)

    Ren, Huamin; Moeslund, Thomas B.

    2014-01-01

    We propose to detect abnormal events via a sparse subspace clustering algorithm. Unlike most existing approaches, which search for optimized normal bases and detect abnormality based on least square error or reconstruction error from the learned normal patterns, we propose an abnormality measurem...... is found that satisfies: the distance between its local space and the normal space is large. We evaluate our method on two public benchmark datasets: UCSD and Subway Entrance datasets. The comparison to the state-of-the-art methods validate our method's effectiveness....

  14. Structure-aware Local Sparse Coding for Visual Tracking

    KAUST Repository

    Qi, Yuankai

    2018-01-24

    Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode these using a dictionary consisting of patches sampled from target template images. The discriminative strength of existing methods based on local sparse coding is limited as spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show local regions of a candidate region should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate the issues with tracking drifts, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.

  15. Direct Adaptive Tracking Control for a Class of Pure-Feedback Stochastic Nonlinear Systems Based on Fuzzy-Approximation

    Directory of Open Access Journals (Sweden)

    Huanqing Wang

    2014-01-01

    Full Text Available The problem of fuzzy-based direct adaptive tracking control is considered for a class of pure-feedback stochastic nonlinear systems. During the controller design, fuzzy logic systems are used to approximate the packaged unknown nonlinearities, and then a novel direct adaptive controller is constructed via backstepping technique. It is shown that the proposed controller guarantees that all the signals in the closed-loop system are bounded in probability and the tracking error eventually converges to a small neighborhood around the origin in the sense of mean quartic value. The main advantages lie in that the proposed controller structure is simpler and only one adaptive parameter needs to be updated online. Simulation results are used to illustrate the effectiveness of the proposed approach.

  16. HIGHLY PRECISE APPROXIMATION OF FREE SURFACE GREEN FUNCTION AND ITS HIGH ORDER DERIVATIVES BASED ON REFINED SUBDOMAINS

    Directory of Open Access Journals (Sweden)

    Jiameng Wu

    2018-01-01

    Full Text Available The infinite-depth free surface Green function (GF) and its high order derivatives for diffraction and radiation of water waves are considered. In particular, second order derivatives are essential requirements in the high-order panel method. In this paper, starting from the classical representation, composed of a semi-infinite integral involving a Bessel function and a Cauchy singularity, not only the GF and its first order derivatives but also its second order derivatives are derived from four kinds of analytical series expansion and a refined division of the whole calculation domain. The approximations of special functions, particularly the hypergeometric function, and the algorithmic applicability with different subdomains are implemented. As a result, the computation accuracy can reach 10^-9 in the whole domain, compared with conventional methods based on direct numerical integration. Furthermore, the numerical efficiency is almost equivalent to that of the classical method.

  17. Sparse random matrices: The eigenvalue spectrum revisited

    International Nuclear Information System (INIS)

    Semerjian, Guilhem; Cugliandolo, Leticia F.

    2003-08-01

    We revisit the derivation of the density of states of sparse random matrices. We derive a recursion relation that allows one to compute the spectrum of the matrix of incidence for finite trees that determines completely the low concentration limit. Using the iterative scheme introduced by Biroli and Monasson [J. Phys. A 32, L255 (1999)] we find an approximate expression for the density of states expected to hold exactly in the opposite limit of large but finite concentration. The combination of the two methods yields a very simple geometric interpretation of the tails of the spectrum. We test the analytic results with numerical simulations and we suggest an indirect numerical method to explore the tails of the spectrum. (author)
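
    The quantity studied here, the eigenvalue density of a sparse random matrix at fixed mean connectivity, is straightforward to estimate numerically, which is also how such analytic results are usually checked. A minimal experiment (matrix size, connectivity and number of samples are illustrative) could be:

        import numpy as np

        rng = np.random.default_rng(0)
        N, c = 1000, 3.0                          # matrix size and mean connectivity

        def sparse_sample():
            # symmetric 0/1 matrix in which each off-diagonal entry is present with probability c/N
            upper = np.triu(rng.random((N, N)) < c / N, k=1).astype(float)
            return upper + upper.T

        eigs = np.concatenate([np.linalg.eigvalsh(sparse_sample()) for _ in range(5)])
        density, edges = np.histogram(eigs, bins=100, density=True)
        print("estimated density of states near zero:", density[50])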

  18. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valuedfunctions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are notonly of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to applythese methods in other fields. It is largely self- contained, but the readershould have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  19. Occlusion detection via structured sparse learning for robust object tracking

    KAUST Repository

    Zhang, Tianzhu

    2014-01-01

    Sparse representation based methods have recently drawn much attention in visual tracking due to good performance against illumination variation and occlusion. They assume the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios, these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way. In fact, pixels in error constitute contiguous regions within the object’s track. This is the case when significant occlusion occurs. To accommodate for nonsparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. This propagated information determines which pixels will contribute to the sparse representation of the current track. In other words, pixels that were detected as part of an occlusion in the previous frame will be removed from the target representation process. As such, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Extensive experimental results show that our proposed tracker consistently outperforms the state-of-the-art trackers.

  20. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat color image pixel as a scalar, which represents color channels separately or concatenate color channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient comparing with the current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in color image analysis and processing domain.

  1. Resource optimised reconfigurable modular parallel pipelined stochastic approximation-based self-tuning regulator architecture with reduced latency

    Directory of Open Access Journals (Sweden)

    Varghese Mathew Vaidyan

    2015-09-01

    Full Text Available Present self-tuning regulator architectures based on recursive least-squares estimation are computationally expensive and require a large amount of resources and time to generate the first control signal, owing to computational bottlenecks imposed by the calculations involved in the estimation stage, the different stages of matrix multiplication and the number of intermediate variables at each iteration. This precludes their use in applications that have fast required response times and in those which run on embedded computing platforms with low-power or low-cost requirements and constraints on resource usage. A salient feature of this study is that a new modular parallel pipelined stochastic approximation-based self-tuning regulator architecture is proposed which reduces the time required to generate the first control signal, reduces resource usage and reduces the number of intermediate variables. Fast matrix multiplication, pipelining and high-speed arithmetic function implementations were used to improve the performance. Implementation results demonstrate that the proposed architecture improves the control signal generation time by 38% and reduces resource usage by 41% in terms of multipliers and 44.4% in terms of adders compared with the best existing related work, opening up new possibilities for the application of online embedded self-tuning regulators.

  2. The use of approximation formulae in calculations of acid-base equilibria-II: salts of mono- and diprotic acids.

    Science.gov (United States)

    Narasaki, H

    1980-02-01

    The pH of solutions of salts of mono- and diprotic acids is calculated by use of approximation formulae and the theoretically exact equations. The regions of useful application of the approximation formulae (i.e., where the error stays within an acceptable limit) are identified; for salts of monoprotic acids, these areas are symmetrically equal to those of the acids. For salts of diprotic acids the ranges generally depend on K(2)/K(1).
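
    For the monoprotic case the comparison can be reproduced directly: for a salt NaA of a weak acid HA at concentration C, the usual approximation is pH ~ 7 + (1/2)pKa + (1/2)log10(C), while the exact pH follows from the charge and mass balances. The pKa and concentration below are only an example.

        import numpy as np
        from scipy.optimize import brentq

        Kw = 1.0e-14
        Ka = 10.0 ** -4.76        # an acetic-acid-like pKa (illustrative)
        C = 0.01                  # salt concentration, mol/L

        # approximation formula for the salt of a monoprotic weak acid and a strong base
        pH_approx = 7.0 + 0.5 * (-np.log10(Ka)) + 0.5 * np.log10(C)

        # exact: charge balance [Na+] + [H+] = [A-] + [OH-] and mass balance [HA] + [A-] = C
        def residual(h):
            oh = Kw / h
            a_minus = C + h - oh          # from the charge balance with [Na+] = C
            ha = C - a_minus              # equals oh - h
            return Ka * ha - h * a_minus  # Ka = [H+][A-]/[HA]

        h_exact = brentq(residual, 1e-13, 1e-7)
        print("approximate pH:", round(pH_approx, 3), " exact pH:", round(-np.log10(h_exact), 3))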

  3. Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification

    KAUST Repository

    Winokur, J.

    2015-12-19

    We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed for instance in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of a PSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.

  4. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory

    Directory of Open Access Journals (Sweden)

    Bin Zhang

    2017-06-01

    Full Text Available By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory; based on this model, and on the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three kinds of liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated by using the proposed model under four sets of different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level under four sets of wall thickness. Combining these measurements with the sound field characteristics, the influence of different-sized transducers on the pressure calculation and detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets the industrial inspection requirements.

  5. Design method for low order two-degree-of-freedom controller based on Pade approximation of the denominator series expansion

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1999-01-01

    Having the advantage of independently setting feedback characteristics, such as the disturbance rejection specification, and the reference response characteristics, two-degree-of-freedom (2DOF) control is widely utilized to improve control performance. Ordinary design methods such as model matching usually derive a high-order feedforward element of the 2DOF controller. In this paper, we propose a new design method for a low-order feedforward element which is based on a Pade approximation of the denominator series expansion. The features of the proposed method are as follows: (1) it is suited to realizing the reference response characteristics in the low-frequency region, (2) the order of the feedforward element can be selected independently of the feedback element. These are essential to the 2DOF controller design. With this method, a 2DOF reactor power controller is designed and its control performance is evaluated by numerical simulation with a reactor dynamics model. From this evaluation, it is confirmed that the controller designed by the proposed method possesses control characteristics equivalent to those of the controller obtained by the ordinary model-matching method. (author)
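
    Independently of the controller details, the elementary operation the method relies on, turning the leading coefficients of a series expansion into a low-order rational (Pade) approximant, is available directly in SciPy. The sketch below builds a [1/2] Pade approximant of exp(-x) from its Taylor coefficients; the choice of function and orders is purely illustrative.

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of exp(-x), in increasing order: 1 - x + x^2/2 - x^3/6
        an = [1.0, -1.0, 0.5, -1.0 / 6.0]

        p, q = pade(an, 2)      # denominator of order 2, numerator of order 1
        x = 0.5
        print("Pade [1/2] value at x = 0.5:", p(x) / q(x))
        print("exp(-0.5)                  :", np.exp(-0.5))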

  6. Measurements of Atomic Rayleigh Scattering Cross-Sections: A New Approach Based on Solid Angle Approximation and Geometrical Efficiency

    Science.gov (United States)

    Rao, D. V.; Takeda, T.; Itai, Y.; Akatsuka, T.; Seltzer, S. M.; Hubbell, J. H.; Cesareo, R.; Brunetti, A.; Gigante, G. E.

    Atomic Rayleigh scattering cross-sections for low, medium and high Z atoms are measured in vacuum using an X-ray tube with a secondary target as the excitation source instead of radioisotopes. Monoenergetic Kα radiation emitted from the secondary target and monoenergetic radiation produced using two secondary targets with filters coupled to an X-ray tube are compared. The Kα radiation from the second target of the system is used to excite the sample. The background is reduced considerably and the monochromacy is improved. Elastic scattering of the Kα X-ray line energies of the secondary target by the sample is recorded with HPGe and Si(Li) detectors. A new approach is developed to estimate the solid angle approximation and geometrical efficiency for a system with an experimental arrangement using an X-ray tube and secondary target. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. The efficiency is larger because the X-ray fluorescent source acts as a converter. Experimental results based on this system are compared with theoretical estimates, and good agreement is observed between them.

  7. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory.

    Science.gov (United States)

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-06-15

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory; based on this model, and on the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three kinds of liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated by using the proposed model under four sets of different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the different ultrasonic impedances of the liquids on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are selected to measure the liquid level under four sets of wall thickness. Combining these measurements with the sound field characteristics, the influence of different-sized transducers on the pressure calculation and detection resolution is discussed and analyzed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets the industrial inspection requirements.

  8. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure exerted by infected individuals on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
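
    A minimal sketch of the sampling idea follows: the infectious pressure on each susceptible individual is approximated by summing a spatial kernel over a simple random sample of the infectives and rescaling, instead of summing over all of them. The kernel form, parameter values and function names are illustrative and are not taken from the paper.

```python
import numpy as np

def infectious_pressure(susceptible_xy, infective_xy, beta=1.0, kappa=2.0,
                        sample_frac=None, rng=None):
    """Infectious pressure on each susceptible from a power-law distance kernel.

    If sample_frac is given, only a simple random sample of infectives is used
    and the partial sum is rescaled, giving a fast approximate likelihood term."""
    rng = rng or np.random.default_rng(0)
    inf = infective_xy
    scale = 1.0
    if sample_frac is not None and len(inf) > 1:
        m = max(1, int(sample_frac * len(inf)))
        idx = rng.choice(len(inf), size=m, replace=False)
        inf = inf[idx]
        scale = len(infective_xy) / m   # rescale the partial sum
    # Pairwise distances: (n_susceptible, n_sampled_infectives)
    d = np.linalg.norm(susceptible_xy[:, None, :] - inf[None, :, :], axis=-1)
    return scale * beta * np.sum(d ** (-kappa), axis=1)

rng = np.random.default_rng(1)
sus = rng.uniform(0, 10, size=(500, 2))
inf = rng.uniform(0, 10, size=(2000, 2))
full = infectious_pressure(sus, inf)
approx = infectious_pressure(sus, inf, sample_frac=0.1, rng=rng)
print("mean relative error:", np.mean(np.abs(approx - full) / full))
```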

  9. A vehicle stability control strategy with adaptive neural network sliding mode theory based on system uncertainty approximation

    Science.gov (United States)

    Ji, Xuewu; He, Xiangkun; Lv, Chen; Liu, Yahui; Wu, Jian

    2018-06-01

    Modelling uncertainty, parameter variation and unknown external disturbances are the major concerns in the development of an advanced controller for vehicle stability at the limits of handling. The sliding mode control (SMC) method has proven robust against parameter variation and unknown external disturbances while offering satisfactory tracking performance. However, modelling uncertainty, such as the errors introduced by model simplification, is inevitable in model-based controller design and lowers control quality. The adaptive radial basis function network (ARBFN) can effectively improve control performance in the presence of large system uncertainty by learning to approximate arbitrary nonlinear functions while ensuring the global asymptotic stability of the closed-loop system. In this paper, a novel vehicle dynamics stability control strategy is proposed using adaptive radial basis function network sliding mode control (ARBFN-SMC) to learn the system uncertainty and eliminate its adverse effects. The strategy adopts a hierarchical control structure consisting of a reference model layer, a yaw moment control layer, a braking torque allocation layer and an executive layer. Co-simulation using MATLAB/Simulink and AMESim is conducted on a verified 15-DOF nonlinear vehicle system model with an integrated electro-hydraulic brake system (I-EHB) actuator in a Sine With Dwell manoeuvre. The simulation results show that the ARBFN-SMC scheme exhibits superior stability and tracking performance under different running conditions compared with the SMC scheme.
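
    The sketch below illustrates the approximation component of such a scheme: a small radial basis function network whose weights are adapted online from an error signal so that it learns an unknown nonlinear term. In the actual controller this error would come from the sliding surface; here a known test function stands in for the model uncertainty, and the gains, widths and centre placements are illustrative.

```python
import numpy as np

class AdaptiveRBFN:
    """Single-output RBF network with online (gradient-descent) weight adaptation,
    used here to approximate an unknown scalar uncertainty term f(x)."""

    def __init__(self, centers, width, learning_rate=0.05):
        self.c = np.asarray(centers)          # basis-function centres
        self.sigma = width
        self.w = np.zeros(len(self.c))        # adaptive weights
        self.eta = learning_rate

    def phi(self, x):
        return np.exp(-((x - self.c) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return self.w @ self.phi(x)

    def adapt(self, x, error):
        # Weight adaptation law: change proportional to error * phi(x)
        self.w += self.eta * error * self.phi(x)

# Unknown "model uncertainty" to be learned (illustrative stand-in)
f_true = lambda x: 0.8 * np.sin(2.0 * x) + 0.3 * x

net = AdaptiveRBFN(centers=np.linspace(-3, 3, 15), width=0.5)
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-3, 3)
    err = f_true(x) - net.predict(x)   # in a controller this would come from the sliding surface
    net.adapt(x, err)

xs = np.linspace(-3, 3, 7)
print(np.round(f_true(xs) - np.array([net.predict(x) for x in xs]), 3))
```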

  10. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace obtained via sampling. We provide a solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
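
    One common way to realise this kind of approximation is to restrict the kernel computation to a random sample of landmark points (a Nyström-type subspace) and run ordinary winner-take-all competitive learning on the resulting approximate feature vectors. The sketch below shows that idea only; it is not the authors' AKCL/PAKCL implementation, and the kernel, sample size and data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=0.5):
    """Approximate kernel feature map built from a sample of landmark points."""
    W = rbf_kernel(landmarks, landmarks, gamma)
    C = rbf_kernel(X, landmarks, gamma)
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-12, None)
    return C @ evecs / np.sqrt(evals)     # rows are approximate feature vectors

def competitive_learning(F, k=3, epochs=20, eta=0.1, seed=0):
    """Plain winner-take-all competitive learning in the (approximate) feature space."""
    rng = np.random.default_rng(seed)
    prototypes = F[rng.choice(len(F), k, replace=False)].copy()
    for _ in range(epochs):
        for i in rng.permutation(len(F)):
            j = np.argmin(((F[i] - prototypes) ** 2).sum(-1))   # winner
            prototypes[j] += eta * (F[i] - prototypes[j])
    return prototypes

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (-2, 0, 2)])
landmarks = X[rng.choice(len(X), 50, replace=False)]   # sampling step
F = nystrom_features(X, landmarks)
protos = competitive_learning(F, k=3)
labels = np.argmin(((F[:, None, :] - protos[None, :, :]) ** 2).sum(-1), axis=1)
print(np.bincount(labels))
```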

  11. High Resolution DOA Estimation Based on Sparse Reconstruction of Delay Correlation Function

    Institute of Scientific and Technical Information of China (English)

    谢前朋; 王伦文

    2016-01-01

    To address the poor direction-finding accuracy of the MUSIC algorithm and the fourth-order-cumulant-based MUSIC-like algorithm in Gaussian colored noise, a high-resolution DOA estimation algorithm based on sparse reconstruction of the delay correlation function of the received data is proposed. The algorithm exploits the angular information contained in the delay correlation functions of the received data: the delay correlation functions of the data received by all array elements are used to construct a new array output matrix and, from it, a new covariance matrix; a singular value decomposition is then applied, a sparse representation model is established, and the model is solved with an l1-norm method to achieve high-resolution DOA estimation in colored noise. Simulation experiments show that the direction-finding resolution of the delay-correlation-based sparse representation model is better than that of the conventional subspace-based MUSIC algorithm and the fourth-order-cumulant-based MUSIC-like algorithm, while reducing the complexity of the covariance construction and enhancing the suppression of Gaussian colored noise.
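
    A minimal sketch of grid-based sparse DOA recovery follows: the array snapshot is represented in an overcomplete dictionary of steering vectors and the coefficient vector is recovered by l1-regularised least squares, here solved with plain iterative soft-thresholding. It uses a direct steering-vector model rather than the delay-correlation construction of the paper, and the array geometry, noise level and regularisation weight are illustrative.

```python
import numpy as np

def steering_matrix(n_sensors, angles_deg, d=0.5):
    """ULA steering vectors (element spacing d in wavelengths) for a grid of candidate angles."""
    k = np.arange(n_sensors)[:, None]
    theta = np.deg2rad(angles_deg)[None, :]
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (complex x)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - (A.conj().T @ (A @ x - y)) / L
        mag = np.abs(g)
        x = g * np.maximum(mag - lam / L, 0.0) / np.maximum(mag, 1e-12)
    return x

rng = np.random.default_rng(0)
grid = np.arange(-90, 90.5, 0.5)
A = steering_matrix(8, grid)
true_doas = [-20.0, 15.0]
y = sum(A[:, np.searchsorted(grid, doa)] for doa in true_doas)
y = y + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
x_hat = ista_l1(A, y, lam=0.2)
top = grid[np.argsort(np.abs(x_hat))[-2:]]
print("estimated DOAs (deg):", np.sort(top))
```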

  12. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed-memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is used to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
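
    The polynomial-expansion idea behind such libraries can be illustrated in a few lines: approximate f(A) by a truncated Chebyshev expansion evaluated with the three-term recurrence, so that only sparse matrix products are needed and no diagonalization is performed. The sketch below uses SciPy sparse matrices and is not NTPoly's algorithm or interface; the matrix, the function (exp) and the expansion degree are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def chebyshev_matrix_function(A, f, degree=30, spectrum_bounds=(-1.0, 1.0)):
    """Approximate f(A) for sparse symmetric A via a truncated Chebyshev expansion.

    A is first mapped so its spectrum lies in [-1, 1]; only sparse matrix products
    are used, which is the basis of diagonalization-free methods."""
    lo, hi = spectrum_bounds
    alpha, beta = (hi + lo) / 2.0, (hi - lo) / 2.0
    n = A.shape[0]
    I = sp.identity(n, format="csr")
    B = (A - alpha * I) / beta                      # spectrum mapped into [-1, 1]

    # Chebyshev coefficients of x -> f(alpha + beta*x), fitted at Chebyshev nodes
    xk = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
    coeffs = np.polynomial.chebyshev.chebfit(xk, f(alpha + beta * xk), degree)

    T_prev, T_curr = I, B                           # T_0(B), T_1(B)
    F = coeffs[0] * I + coeffs[1] * B
    for k in range(2, degree + 1):
        T_next = 2.0 * (B @ T_curr) - T_prev        # three-term recurrence
        F = F + coeffs[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return F

# Illustrative test: exp of a small sparse symmetric matrix
A = sp.random(200, 200, density=0.02, random_state=0)
A = 0.5 * (A + A.T)
bound = abs(A).sum(axis=1).max()                    # Gershgorin-type spectral bound
F = chebyshev_matrix_function(A.tocsr(), np.exp, degree=40, spectrum_bounds=(-bound, bound))
print("approx trace of exp(A):", F.diagonal().sum())
```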

  13. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

  14. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
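
    A minimal ABC rejection sketch of this idea follows: candidate (mean, SD) pairs are drawn from a prior, a sample of the reported size is simulated for each, and the candidates whose simulated median, minimum and maximum best match the reported summaries are retained. The normal data-generating model, the priors, the acceptance rule and all numbers are illustrative and are not the authors' exact settings.

```python
import numpy as np

def abc_mean_sd(median_obs, min_obs, max_obs, n, n_draws=50_000, keep=500, seed=0):
    """ABC rejection: draw (mu, sigma) from a prior, simulate a sample of size n,
    keep the draws whose simulated (median, min, max) best match the reported ones."""
    rng = np.random.default_rng(seed)
    # Weakly informative priors centred on crude range-based guesses (illustrative)
    mu0 = median_obs
    sd0 = (max_obs - min_obs) / 4.0
    mu = rng.normal(mu0, 2.0 * sd0, n_draws)
    sigma = rng.uniform(1e-3, 3.0 * sd0, n_draws)

    sims = rng.normal(mu[:, None], sigma[:, None], size=(n_draws, n))
    s = np.column_stack([np.median(sims, axis=1), sims.min(axis=1), sims.max(axis=1)])
    target = np.array([median_obs, min_obs, max_obs])
    dist = np.linalg.norm((s - target) / sd0, axis=1)    # scaled discrepancy

    best = np.argsort(dist)[:keep]                       # accept the closest draws
    return mu[best].mean(), sigma[best].mean()

# Reported summaries from a hypothetical study of size n = 50
mu_hat, sd_hat = abc_mean_sd(median_obs=10.2, min_obs=4.1, max_obs=17.8, n=50)
print(f"ABC estimates: mean = {mu_hat:.2f}, sd = {sd_hat:.2f}")
```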

  15. Estimation of geological formation thermal conductivity by using stochastic approximation method based on well-log temperature data

    International Nuclear Information System (INIS)

    Cheng, Wen-Long; Huang, Yong-Hua; Liu, Na; Ma, Ran

    2012-01-01

    Thermal conductivity is a key parameter for evaluating wellbore heat losses, which play an important role in determining the efficiency of steam injection processes. In this study, an unsteady formation heat-transfer model was established and a cost-effective in situ method using a stochastic approximation method based on well-log temperature data was presented. The proposed method was able to estimate the thermal conductivity and the volumetric heat capacity of the geological formation simultaneously under in situ conditions. The feasibility of the method was assessed by a sample test, the results of which showed that the thermal conductivity and the volumetric heat capacity could be obtained with relative errors of −0.21% and −0.32%, respectively. In addition, three field tests were conducted based on easily obtainable well-log temperature data from steam injection wells. It was found that the relative errors of thermal conductivity for the three field tests were within ±0.6%, demonstrating the excellent performance of the proposed method for calculating thermal conductivity. The relative errors of volumetric heat capacity ranged from −6.1% to −14.2% for the three field tests. Sensitivity analysis indicated that this was due to the low correlation between the volumetric heat capacity and the wellbore temperature, which was used to generate the judgment criterion. -- Highlights: ► A cost-effective in situ method for estimating the thermal properties of a formation was presented. ► Thermal conductivity and volumetric heat capacity can be estimated simultaneously by the proposed method. ► The relative error of the estimated thermal conductivity was within ±0.6%. ► Sensitivity analysis was conducted to study the estimated thermal properties.

  16. Limited-memory trust-region methods for sparse relaxation

    Science.gov (United States)

    Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.

    2017-08-01

    In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.

  17. Reliability of Broadcast Communications Under Sparse Random Linear Network Coding

    OpenAIRE

    Brown, Suzie; Johnson, Oliver; Tassi, Andrea

    2018-01-01

    Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kin...

  18. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profiles data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that the seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  19. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  20. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  1. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  2. Sparse Matrices in Frame Theory

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Krahmer, Felix; Kutyniok, Gitta

    2014-01-01

    Frame theory is closely intertwined with signal processing through a canon of methodologies for the analysis of signals using (redundant) linear measurements. The canonical dual frame associated with a frame provides a means for reconstruction by a least squares approach, but other dual frames yield alternative reconstruction procedures. The novel paradigm of sparsity has recently entered the area of frame theory in various ways. Of those different sparsity perspectives, we will focus on the situations where frames and (not necessarily canonical) dual frames can be written as sparse matrices...

  3. Diffusion Indexes with Sparse Loadings

    DEFF Research Database (Denmark)

    Kristensen, Johannes Tang

    The use of large-dimensional factor models in forecasting has received much attention in the literature with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs...... to the problem by using the LASSO as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model which is better suited for forecasting compared...... it to be an important alternative to PC....

  4. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable...

  5. Dynamic Representations of Sparse Graphs

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf

    1999-01-01

    We present a linear space data structure for maintaining graphs with bounded arboricity—a large class of sparse graphs containing e.g. planar graphs and graphs of bounded treewidth—under edge insertions, edge deletions, and adjacency queries. The data structure supports adjacency queries in worst case O(c) time, and edge insertions and edge deletions in amortized O(1) and O(c+log n) time, respectively, where n is the number of nodes in the graph, and c is the bound on the arboricity....

  6. Epileptic seizure detection using DWT-based approximate entropy, Shannon entropy and support vector machine: a case study.

    Science.gov (United States)

    Sharmila, A; Aman Raj, Suman; Shashank, Pandey; Mahalakshmi, P

    2018-01-01

    In this work, we have used a time-frequency domain analysis method, the discrete wavelet transform (DWT) technique. This method stands out from other proposed methods because of its algorithmic elegance and accuracy. A wavelet is a mathematical function based on time-frequency analysis in signal processing. It is particularly useful because it allows a weak signal to be recovered from a noisy signal without much distortion. A wavelet analysis works by analysing the image and converting it into a mathematical function that is decoded by the receiver. Furthermore, we have used Shannon entropy and approximate entropy (ApEn) for extracting the complexities associated with electroencephalographic (EEG) signals. In this study, ApEn is a suitable feature for characterising the EEG because its value drops suddenly during epileptic activity, owing to the excessive synchronous discharge of neurons in the brain. EEG signals are decomposed into six sub-bands, namely D1-D5 and A5, using the DWT technique. Non-linear features such as ApEn and Shannon entropy are calculated from these sub-bands, and support vector machine classifiers are used for classification. The scheme is tested using EEG data recorded from five healthy subjects and five epileptic patients during the inter-ictal and ictal periods. The data were acquired from the University of Bonn, Germany. The proposed method is evaluated on 15 classification problems and achieves a classification accuracy of 100% for two cases, indicating the good classification performance of the proposed method.
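
    A sketch of this feature-extraction pipeline is given below: each EEG segment is decomposed into the A5 and D5-D1 sub-bands with a discrete wavelet transform (PyWavelets), approximate entropy and Shannon entropy are computed per sub-band, and the resulting feature vectors are fed to an SVM. The synthetic signals, wavelet choice and parameters are illustrative stand-ins for the Bonn data and the paper's exact settings.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)           # includes the self-match, so c > 0
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

def shannon_entropy(x, bins=32):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def eeg_features(segment, wavelet="db4", level=5):
    """Entropy features from the DWT sub-bands A5 and D5..D1."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)     # [A5, D5, D4, D3, D2, D1]
    feats = []
    for band in coeffs:
        feats += [approximate_entropy(band), shannon_entropy(band)]
    return np.array(feats)

# Illustrative synthetic data standing in for inter-ictal vs. ictal segments
rng = np.random.default_rng(0)
normal = [rng.standard_normal(512) for _ in range(40)]
ictal = [np.sin(0.3 * np.arange(512)) + 0.3 * rng.standard_normal(512) for _ in range(40)]
X = np.array([eeg_features(s) for s in normal + ictal])
y = np.array([0] * 40 + [1] * 40)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # train on every other segment
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```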

  7. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    . The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  8. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations. However, most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  9. Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model

    Directory of Open Access Journals (Sweden)

    Yuan (Alan) Qi

    2010-01-01

    The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein-level activity, the correlated regulations between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes through a sparse prior, and through a developed Gibbs sampling algorithm a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and the results demonstrated the validity and effectiveness of the proposed approach. The proposed model was then applied to breast cancer microarray data from patients with Estrogen Receptor positive (ER+) and Estrogen Receptor negative (ER-) status, respectively.

  10. Big geo data surface approximation using radial basis functions: A comparative study

    Science.gov (United States)

    Majdisova, Zuzana; Skala, Vaclav

    2017-12-01

    Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
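
    The sparsity exploited in such an approach comes from the compact support of the basis functions: only point/centre pairs closer than the support radius produce nonzero matrix entries. The sketch below illustrates this with a Wendland C2 CS-RBF, a k-d tree to find the interacting pairs, and a sparse least-squares solve; the centre layout, support radius and test surface are illustrative and this is not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr
from scipy.spatial import cKDTree

def wendland_c2(r):
    """Compactly supported Wendland C2 RBF, phi(r) = (1-r)^4_+ (4r+1) for scaled r."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def csrbf_matrix(points, centers, support):
    """Sparse collocation matrix: only point/centre pairs closer than `support` contribute."""
    tree_p, tree_c = cKDTree(points), cKDTree(centers)
    pairs = tree_p.sparse_distance_matrix(tree_c, support).tocoo()
    vals = wendland_c2(pairs.data / support)
    return sp.coo_matrix((vals, (pairs.row, pairs.col)),
                         shape=(len(points), len(centers))).tocsr()

# Scattered samples of an illustrative surface
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(5000, 2))
f = np.sin(4 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# Regular grid of RBF centres (fewer centres than data points: overdetermined least squares)
gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
centers = np.column_stack([gx.ravel(), gy.ravel()])

A = csrbf_matrix(pts, centers, support=0.15)
coef = lsqr(A, f)[0]                              # sparse least-squares solve
print("fill ratio:", A.nnz / (A.shape[0] * A.shape[1]))
print("RMS residual:", np.sqrt(np.mean((A @ coef - f) ** 2)))
```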

  11. Atmospheric inverse modeling via sparse reconstruction

    Science.gov (United States)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.

  12. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-09-08

    In this work, two topics of reservoir simulations are discussed. The first topic is the two-phase compositional flow simulation in hydrocarbon reservoir. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, and thus speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code can gain satisfactory scalability, and the sparse grids can remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse grid approximation of all possible results of the flash calculation is generated before the simulation. Then the constructed surrogate model is evaluated to approximate the flash calculation results during the simulation. The second topic is the wormhole propagation simulation in carbonate reservoir. In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework called Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition to that, a new technique called “experimenting field approach” to set coefficients in the model equations is introduced. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more like wormholes and more stable than the Darcy framework, which is a demonstration of the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability can be achieved. Finally, a mixed

  13. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the earlier obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.

  14. Quasi optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

    KAUST Repository

    Tamellini, Lorenzo

    2016-01-05

    In this talk we discuss possible strategies to minimize the impact of the curse of dimensionality when building sparse-grid approximations of a multivariate function u = u(y1, ..., yN). More precisely, we present a knapsack approach, in which we estimate the cost and the error-reduction contribution of each possible component of the sparse grid, and then choose the components with the highest error-reduction/cost ratio. The estimates of the error reduction are obtained either by a mixed a-priori/a-posteriori approach, in which we first derive a theoretical bound and then tune it with some inexpensive auxiliary computations (resulting in the so-called quasi-optimal sparse grids), or by a fully a-posteriori approach (obtaining the so-called adaptive sparse grids). This framework is very general and can be used to build quasi-optimal/adaptive sparse grids on bounded and unbounded domains (e.g. u depending on uniform and normal random distributions for yn), using both nested and non-nested families of univariate collocation points. We present some theoretical convergence results as well as numerical results showing the efficiency of the proposed approach for the approximation of the solution of elliptic PDEs with random diffusion coefficients. In this context, to treat the case of rough permeability fields, for which a sparse grid approach may not be suitable, we propose to use the sparse grids as a control variate in a Monte Carlo simulation.

  15. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
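
    The estimator is available in standard libraries; the sketch below uses scikit-learn's independent implementation (not the authors' code) to recover a known sparse precision matrix from simulated Gaussian data. The matrix, sample size and penalty value are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate data from a known sparse precision (inverse covariance) matrix
rng = np.random.default_rng(0)
prec = np.array([[ 2.0, -0.8,  0.0,  0.0],
                 [-0.8,  2.0, -0.6,  0.0],
                 [ 0.0, -0.6,  2.0, -0.5],
                 [ 0.0,  0.0, -0.5,  2.0]])
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(4), cov, size=2000)

# Lasso-penalized maximum likelihood estimate of the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
print("estimated precision (rounded):")
print(np.round(model.precision_, 2))   # off-diagonal zeros recover the sparsity pattern
```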

  16. Low-count PET image restoration using sparse representation

    Science.gov (United States)

    Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli

    2018-04-01

    In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of the projection data. Solving this problem by improving hardware is an expensive solution, and therefore we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Of these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. This proposed strategy is a new and efficient approach for improving the quality of PET images.
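
    A minimal sketch of the coupled-dictionary idea follows: a joint dictionary is learned on concatenated low-/high-resolution patch pairs and split into D1 and D2; at test time a low-resolution patch is sparse-coded in D1 and the same coefficients are applied to D2 to synthesise the high-resolution patch. The 1-D synthetic patches, the scikit-learn dictionary learner and all parameters are illustrative and do not reproduce the paper's training procedure or its image-domain patch handling.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
n_patches, patch_len = 2000, 16

# Stand-in training data: "high-resolution" patches are smooth random signals,
# "low-resolution" patches are a moving-average (blurred) version of them.
t = np.linspace(0, 1, patch_len)
high = np.array([np.sin(2 * np.pi * rng.uniform(1, 4) * t + rng.uniform(0, 2 * np.pi))
                 for _ in range(n_patches)])
kernel = np.ones(5) / 5.0
low = np.array([np.convolve(p, kernel, mode="same") for p in high])

# Learn a joint dictionary on concatenated (low, high) patch pairs, then split it
# into D1 (low-resolution part) and D2 (high-resolution part).
joint = np.hstack([low, high])
dl = DictionaryLearning(n_components=32, alpha=0.5, max_iter=20, random_state=0).fit(joint)
D1 = dl.components_[:, :patch_len]
D2 = dl.components_[:, patch_len:]

# At test time: sparse-code a new low-resolution patch in D1 ...
test_high = np.sin(2 * np.pi * 2.5 * t)
test_low = np.convolve(test_high, kernel, mode="same")
coder = SparseCoder(dictionary=D1, transform_algorithm="lasso_lars", transform_alpha=0.1)
code = coder.transform(test_low[None, :])
# ... and reconstruct the high-resolution patch from the same code using D2.
recon = code @ D2
print("relative error:", np.linalg.norm(recon - test_high) / np.linalg.norm(test_high))
```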

  17. Dose-shaping using targeted sparse optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, George A.; Ruan, Dan [Department of Radiation Oncology, University of California - Los Angeles School of Medicine, 200 Medical Plaza, Los Angeles, California 90095 (United States)

    2013-07-15

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L_1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot

  18. Dose-shaping using targeted sparse optimization

    International Nuclear Information System (INIS)

    Sayre, George A.; Ruan, Dan

    2013-01-01

    Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L_1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^sparse improves

  19. Dose-shaping using targeted sparse optimization.

    Science.gov (United States)

    Sayre, George A; Ruan, Dan

    2013-07-01

    Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L_1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving to large violations of prescription than the Laplace distribution. As a result, the proposed objective E_tot^sparse improves tradeoff between

  20. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
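
    For orientation, the sketch below implements the basic vector-valued smoothed-l0 (SL0) recovery that the proposed Joint-2D-SL0 algorithm generalises: a smooth surrogate of the l0 norm is optimised for a decreasing sequence of smoothing widths, with a projection back onto the measurement constraint after each step. The joint/2-D structure and the Taylor-expansion model of the paper are not reproduced; the problem sizes and parameters are illustrative.

```python
import numpy as np

def sl0(A, y, sigma_min=0.01, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Basic smoothed-l0 (SL0) recovery of a sparse x with A x = y.

    The l0 'norm' is approximated by n - sum(exp(-x^2 / (2 sigma^2))); for a
    decreasing sequence of sigma, a few gradient steps on that smooth surrogate
    are taken, each followed by projection back onto {x : A x = y}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                         # minimum-l2-norm starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta                         # move towards a smaller surrogate l0
            x = x - A_pinv @ (A @ x - y)               # project onto the constraint set
        sigma *= sigma_decrease
    return x

# Illustrative test: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = sl0(A, y)
print("max abs error:", np.max(np.abs(x_hat - x_true)))
```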