WorldWideScience

Sample records for sparse matrix-vector products

  1. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
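
    The report's own C++ classes are not reproduced in this record. The following is a minimal sketch of the kind of routine it studies: a CSR-format sparse matrix-vector product whose row loop is parallelized with OpenMP (rows could equally be distributed over MPI ranks). The names (CsrMatrix, spmv) are illustrative, not taken from the report.

        #include <vector>
        #include <cstddef>

        // Minimal CSR storage: row_ptr has n_rows+1 entries; col_idx/values hold the nonzeros.
        struct CsrMatrix {
            std::size_t n_rows;
            std::vector<std::size_t> row_ptr;
            std::vector<std::size_t> col_idx;
            std::vector<double>      values;
        };

        // y = A * x. Rows are independent, so the outer loop parallelizes directly
        // with OpenMP (or can be split into row blocks across MPI ranks).
        void spmv(const CsrMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
            #pragma omp parallel for schedule(static)
            for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(A.n_rows); ++i) {
                double sum = 0.0;
                for (std::size_t k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
                    sum += A.values[k] * x[A.col_idx[k]];
                y[i] = sum;
            }
        }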

  2. Efficient implementations of block sparse matrix operations on shared memory vector machines

    International Nuclear Information System (INIS)

    Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.

    2000-01-01

    In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
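
    The jagged diagonal (JAD) idea mentioned above can be illustrated with a small sketch (not the authors' code): rows are sorted by decreasing nonzero count, and the j-th nonzero of every sufficiently long row is stored contiguously, so the inner loop runs over many rows at once and vectorizes well. The layout and names (JadMatrix, jd_ptr, perm) are assumed for illustration.

        #include <vector>
        #include <cstddef>

        struct JadMatrix {
            std::size_t n_rows;
            std::vector<std::size_t> perm;     // perm[i] = original index of the i-th (sorted) row
            std::vector<std::size_t> jd_ptr;   // start of each jagged diagonal, size n_diags+1
            std::vector<std::size_t> col_idx;  // column indices, stored diagonal by diagonal
            std::vector<double>      values;   // nonzero values, stored diagonal by diagonal
        };

        // y = A * x with jagged diagonal storage: the inner loop over rows is long,
        // unit-stride in values/col_idx, and free of recurrences, so it vectorizes.
        void spmv_jad(const JadMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
            std::vector<double> tmp(A.n_rows, 0.0);
            const std::size_t n_diags = A.jd_ptr.size() - 1;
            for (std::size_t d = 0; d < n_diags; ++d) {
                const std::size_t len = A.jd_ptr[d + 1] - A.jd_ptr[d];
                for (std::size_t i = 0; i < len; ++i) {            // vectorizable loop
                    const std::size_t k = A.jd_ptr[d] + i;
                    tmp[i] += A.values[k] * x[A.col_idx[k]];
                }
            }
            for (std::size_t i = 0; i < A.n_rows; ++i)             // undo the row permutation
                y[A.perm[i]] = tmp[i];
        }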

  3. Fast sparse matrix-vector multiplication by partitioning and reordering

    NARCIS (Netherlands)

    Yzelman, A.N.

    2011-01-01

    The thesis introduces a cache-oblivious method for the sparse matrix-vector (SpMV) multiplication, which is an important computational kernel in many applications. The method works by permuting rows and columns of the input matrix so that the resulting reordered matrix induces cache-friendly

  4. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them into a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD, generalized K-means clustering for QSVD) method. It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  5. A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs

    Directory of Open Access Journals (Sweden)

    Guixia He

    2016-01-01

    Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU, called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR outperforms CSR-scalar, CSR-vector, and the CSRMV and HYBMV routines in the vendor-tuned CUSPARSE library, and is comparable with a recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR from a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, whether or not the communication between GPUs is considered, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.

  6. Optimizing Sparse Matrix-Multiple Vectors Multiplication for Nuclear Configuration Interaction Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-14

    Obtaining highly accurate predictions on the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4x speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
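
    The CSB-based kernels of the paper are not shown in this record. As a point of reference, a naive CSR-based SpMM (sparse matrix times a block of m dense vectors stored row-major) looks as follows; reusing each loaded matrix entry across all m vectors is what gives SpMM its bandwidth advantage over m separate SpMVs. Names and layout are illustrative only.

        #include <vector>
        #include <cstddef>

        // y (n_rows x m) = A (CSR) * x (n_cols x m); the dense blocks are row-major.
        // Each nonzero of A is loaded once and applied to all m right-hand-side
        // vectors, which is the main advantage of SpMM over m independent SpMVs.
        void spmm_csr(std::size_t n_rows, std::size_t m,
                      const std::vector<std::size_t>& row_ptr,
                      const std::vector<std::size_t>& col_idx,
                      const std::vector<double>& values,
                      const std::vector<double>& x,   // size n_cols * m
                      std::vector<double>& y) {       // size n_rows * m
            for (std::size_t i = 0; i < n_rows; ++i) {
                for (std::size_t j = 0; j < m; ++j) y[i * m + j] = 0.0;
                for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
                    const double a = values[k];
                    const std::size_t c = col_idx[k];
                    for (std::size_t j = 0; j < m; ++j)
                        y[i * m + j] += a * x[c * m + j];
                }
            }
        }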

  7. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    Science.gov (United States)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one provided by Intel MKL, and it is shown that by considering the properties of the sparse matrix, better algorithms can be developed.

  8. Computing the sparse matrix vector product using block-based kernels without zero padding on processors with AVX-512 instructions

    Directory of Open Access Journals (Sweden)

    Bérenger Bramas

    2018-04-01

    The sparse matrix-vector product (SpMV) is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC) community has therefore invested considerable effort in providing efficient SpMV kernels on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels, highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used, based on a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open-source library called SPC5.

  9. Sparse matrix-vector multiplication on GPGPU clusters: A new storage format and a scalable implementation

    OpenAIRE

    Kreutzer, Moritz; Hager, Georg; Wellein, Gerhard; Fehske, Holger; Basermann, Achim; Bishop, Alan R.

    2011-01-01

    Sparse matrix-vector multiplication (spMVM) is the dominant operation in many sparse solvers. We investigate performance properties of spMVM with matrices of various sparsity patterns on the nVidia “Fermi” class of GPGPUs. A new “padded jagged diagonals storage” (pJDS) format is proposed which may substantially reduce the memory overhead intrinsic to the widespread ELLPACK-R scheme while making no assumptions about the matrix structure. In our test scenarios the pJDS format cuts the ...
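
    For context on the zero-fill overhead that pJDS targets, here is a sketch of a plain ELLPACK-style kernel (not the pJDS kernel of the paper): every row is padded to the same length max_nnz, so matrices with a few long rows waste memory and bandwidth, which pJDS reduces by sorting rows by length and padding only within small row blocks. Names and layout are assumptions.

        #include <vector>
        #include <cstddef>

        // ELLPACK storage: n_rows x max_nnz arrays laid out column-major; short rows
        // are padded with zero values (and a valid column index) up to max_nnz.
        void spmv_ellpack(std::size_t n_rows, std::size_t max_nnz,
                          const std::vector<std::size_t>& col_idx, // size n_rows * max_nnz
                          const std::vector<double>& values,       // size n_rows * max_nnz
                          const std::vector<double>& x,
                          std::vector<double>& y) {
            for (std::size_t i = 0; i < n_rows; ++i) y[i] = 0.0;
            for (std::size_t j = 0; j < max_nnz; ++j)          // column-major sweep:
                for (std::size_t i = 0; i < n_rows; ++i) {     // consecutive rows -> contiguous loads
                    const std::size_t k = j * n_rows + i;
                    y[i] += values[k] * x[col_idx[k]];         // padded entries add 0
                }
        }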

  10. Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform

    NARCIS (Netherlands)

    Xu, S.; Xue, W.; Lin, H.X.

    2011-01-01

    In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-data ratio and its performance is mainly bound by the memory bandwidth. We propose optimization of SpMV based on ELLPACK from

  11. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-23

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
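
    The KBLAS-based GPU kernel itself is not reproduced here. The sketch below only illustrates the block-sparse (BCSR-style) layout such kernels exploit, with small dense b×b blocks and a dense matrix-vector product per block; the layout and names are assumptions, not the paper's data structures.

        #include <vector>
        #include <cstddef>

        // Block CSR: block_row_ptr/block_col_idx index b x b dense blocks stored
        // contiguously (row-major) in 'blocks'. The inner b x b product is dense,
        // which is what makes BLAS-like optimizations applicable.
        void spmv_bcsr(std::size_t n_block_rows, std::size_t b,
                       const std::vector<std::size_t>& block_row_ptr,
                       const std::vector<std::size_t>& block_col_idx,
                       const std::vector<double>& blocks,  // size nnzb * b * b
                       const std::vector<double>& x,
                       std::vector<double>& y) {
            for (std::size_t I = 0; I < n_block_rows; ++I) {
                for (std::size_t r = 0; r < b; ++r) y[I * b + r] = 0.0;
                for (std::size_t k = block_row_ptr[I]; k < block_row_ptr[I + 1]; ++k) {
                    const double* blk = &blocks[k * b * b];
                    const std::size_t J = block_col_idx[k];
                    for (std::size_t r = 0; r < b; ++r)        // dense b x b times b vector
                        for (std::size_t c = 0; c < b; ++c)
                            y[I * b + r] += blk[r * b + c] * x[J * b + c];
                }
            }
        }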

  12. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.; Dongarra, Jack

    2016-01-01

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Sparse Matrix-Vector Multiplication on Multicore and Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, Nathan [NVIDIA Research, Santa Clara, CA (United States); Choi, Jee Whan [Georgia Inst. of Technology, Atlanta, GA (United States); Garland, Michael [NVIDIA Research, Santa Clara, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Georgia Inst. of Technology, Atlanta, GA (United States)

    2010-12-07

    This chapter consolidates recent work on the development of high performance multicore and accelerator-based implementations of sparse matrix-vector multiplication (SpMV). As an object of study, SpMV is an interesting computation for two key reasons. First, it appears widely in applications in scientific and engineering computing, financial and economic modeling, and information retrieval, among others, and is therefore of great practical interest. Second, it is simple to describe but challenging to implement well, since its performance is limited by a variety of factors, including low computational intensity, potentially highly irregular memory access behavior, and a strong input dependence that may be known only at run time. Thus, we believe SpMV is both practically important and provides important insights for understanding the algorithmic and implementation principles necessary to make effective use of state-of-the-art systems.

  14. The Real-Valued Sparse Direction of Arrival (DOA Estimation Based on the Khatri-Rao Product

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2016-05-01

    Estimating the direction of arrival (DOA) of a sparse signal using the array covariance matrix requires complex-valued operations, which impose a heavy computational burden, and solving the resulting multiple measurement vectors (MMV) model is difficult. In this paper, a real-valued sparse DOA estimation algorithm based on the Khatri-Rao (KR) product, called L1-RVSKR, is proposed. The proposed algorithm is based on the sparse representation of the array covariance matrix. The array covariance matrix is transformed to a real-valued matrix via a unitary transformation so that a real-valued sparse model is achieved. The real-valued sparse model is vectorized into a single measurement vector (SMV) model, and a new virtual overcomplete dictionary is constructed according to the properties of the KR product. Finally, the sparse DOA estimation is solved by utilizing the idea of a sparse representation of array covariance vectors (SRACV). The simulation results demonstrate the superior performance and the low computational complexity of the proposed algorithm.
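
    The column-wise Khatri-Rao product used when vectorizing a covariance model (each pair of corresponding dictionary columns becomes their Kronecker product) can be computed as in the following sketch, written for real matrices for simplicity; the function name and column-major layout are illustrative only, not the paper's implementation.

        #include <vector>
        #include <cstddef>

        // Column-wise Khatri-Rao product C = A (*) B, column-major storage:
        // A is (m x n), B is (p x n), C is (m*p x n); column j of C is kron(A(:,j), B(:,j)).
        std::vector<double> khatri_rao(std::size_t m, std::size_t p, std::size_t n,
                                       const std::vector<double>& A,   // size m * n
                                       const std::vector<double>& B) { // size p * n
            std::vector<double> C(m * p * n);
            for (std::size_t j = 0; j < n; ++j)
                for (std::size_t i = 0; i < m; ++i)
                    for (std::size_t k = 0; k < p; ++k)
                        C[j * (m * p) + i * p + k] = A[j * m + i] * B[j * p + k];
            return C;
        }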

  15. Better Size Estimation for Sparse Matrix Products

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...

  16. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    Science.gov (United States)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Although this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate PCSMS routines into their own codes.
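
    The record does not give PCSMS's conversion formulas. One standard equivalent real formulation of a complex sparse system, consistent with the description above (whether PCSMS uses exactly this block ordering is not stated), is

        (A + iB)(x + iy) = b + ic
        \;\Longleftrightarrow\;
        \begin{pmatrix} A & -B \\ B & \phantom{-}A \end{pmatrix}
        \begin{pmatrix} x \\ y \end{pmatrix}
        =
        \begin{pmatrix} b \\ c \end{pmatrix}

    so a real sparse direct solver factors a real system of twice the dimension, and the complex solution is recovered as x + iy.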

  17. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.

  18. Efficient diagonalization of the sparse matrices produced within the framework of the UK R-matrix molecular codes

    Science.gov (United States)

    Galiatsatos, P. G.; Tennyson, J.

    2012-11-01

    The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared memory machines (SMM), distributed memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and in order to get back converged results we need only a small part of the matrix spectrum.
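
    The PSMV routine itself is not included in this record. The sketch below only shows the basic idea of a matrix-vector product for a real symmetric matrix stored in coordinate (COO) format with only the lower triangle kept, which is the kind of storage variation described above; names are illustrative.

        #include <vector>
        #include <cstddef>

        // y = A * x for a real symmetric A stored as COO entries with row >= col.
        // Each off-diagonal entry contributes to two rows, so only about half the
        // nonzeros need to be stored and streamed.
        void spmv_sym_coo(std::size_t n, std::size_t nnz,
                          const std::vector<std::size_t>& row,
                          const std::vector<std::size_t>& col,
                          const std::vector<double>& val,
                          const std::vector<double>& x,
                          std::vector<double>& y) {
            for (std::size_t i = 0; i < n; ++i) y[i] = 0.0;
            for (std::size_t k = 0; k < nnz; ++k) {
                const std::size_t i = row[k], j = col[k];
                y[i] += val[k] * x[j];
                if (i != j) y[j] += val[k] * x[i];   // mirror the off-diagonal entry
            }
        }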

  19. Optimization of sparse matrix-vector multiplication on emerging multicore platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2007-01-01

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.

  20. Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard; Shalf, John; Yelick, Katherine; Demmel, James

    2008-10-16

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.

  1. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) the development of an iteration procedure that avoids the storage of FCI-size vectors; (ii) the development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations, which otherwise may be a bottleneck of the procedure, can be skipped. At point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits FCI calculations to be performed on single-node workstations for systems previously accessible only by supercomputers.

  2. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    Energy Technology Data Exchange (ETDEWEB)

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider the parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here, on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
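
    For the regular finite-difference case mentioned above, the matrix need not be stored explicitly at all; a matrix-vector product can be written directly as a 5-point stencil sweep, which is naturally cache-friendly because the inner loop is unit-stride. This is a generic sketch, not the PCG package's routine.

        #include <vector>
        #include <cstddef>

        // y = A * x for the standard 5-point Laplacian on an nx x ny grid with
        // homogeneous Dirichlet boundaries; x and y are stored row by row.
        void stencil_matvec(std::size_t nx, std::size_t ny,
                            const std::vector<double>& x, std::vector<double>& y) {
            auto id = [nx](std::size_t i, std::size_t j) { return j * nx + i; };
            for (std::size_t j = 0; j < ny; ++j)
                for (std::size_t i = 0; i < nx; ++i) {
                    double v = 4.0 * x[id(i, j)];
                    if (i > 0)      v -= x[id(i - 1, j)];
                    if (i + 1 < nx) v -= x[id(i + 1, j)];
                    if (j > 0)      v -= x[id(i, j - 1)];
                    if (j + 1 < ny) v -= x[id(i, j + 1)];
                    y[id(i, j)] = v;
                }
        }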

  3. Speculative segmented sum for sparse matrix-vector multiplication on heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    of the same chip is triggered to re-arrange the predicted partial sums for a correct resulting vector. On three heterogeneous processors from Intel, AMD and nVidia, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over...

  4. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
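
    The kkSpGEMM meta-algorithm is not reproduced in this record. As background for the accumulator discussion above, here is a sketch of the classical row-by-row (Gustavson-style) SpGEMM with a dense accumulator, one of the data-structure choices such algorithms compare against hash-map or sorted-list accumulators; the struct and names are illustrative.

        #include <vector>
        #include <cstddef>

        struct Csr {
            std::size_t n_rows = 0, n_cols = 0;
            std::vector<std::size_t> ptr{0}, idx;
            std::vector<double> val;
        };

        // C = A * B: for each row of A, scatter partial products into a dense
        // workspace of size B.n_cols; 'used' marks the touched columns so only
        // they are gathered into C and reset afterwards.
        Csr spgemm_dense_acc(const Csr& A, const Csr& B) {
            Csr C;
            C.n_rows = A.n_rows;
            C.n_cols = B.n_cols;
            std::vector<double> acc(B.n_cols, 0.0);
            std::vector<char> used(B.n_cols, 0);
            std::vector<std::size_t> touched;
            for (std::size_t i = 0; i < A.n_rows; ++i) {
                touched.clear();
                for (std::size_t ka = A.ptr[i]; ka < A.ptr[i + 1]; ++ka) {
                    const double a = A.val[ka];
                    const std::size_t k = A.idx[ka];
                    for (std::size_t kb = B.ptr[k]; kb < B.ptr[k + 1]; ++kb) {
                        const std::size_t j = B.idx[kb];
                        if (!used[j]) { used[j] = 1; touched.push_back(j); }
                        acc[j] += a * B.val[kb];
                    }
                }
                for (std::size_t j : touched) {          // gather row i of C
                    C.idx.push_back(j);
                    C.val.push_back(acc[j]);
                    acc[j] = 0.0;
                    used[j] = 0;
                }
                C.ptr.push_back(C.idx.size());
            }
            return C;
        }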

  5. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.

  6. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms have to be designed specifically for parallel computation. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).

  7. Accelerating the explicitly restarted Arnoldi method with GPUs using an auto-tuned matrix vector product

    International Nuclear Information System (INIS)

    Dubois, J.; Calvin, Ch.; Dubois, J.; Petiton, S.

    2011-01-01

    This paper presents a parallelized hybrid single-vector Arnoldi algorithm for computing approximations to eigenpairs of a nonsymmetric matrix. We are interested in the use of accelerators and multi-core units to speed up the Arnoldi process. The main goal is to propose a parallel version of the Arnoldi solver, which can efficiently use multiple multi-core processors or multiple graphics processing units (GPUs) in a mixed coarse- and fine-grain fashion. In the proposed algorithms, this is achieved by auto-tuning the matrix-vector product before starting the Arnoldi eigensolver, as well as by reorganizing the data and global communications so that communication time is reduced. The execution time, performance, and scalability are assessed with well-known dense and sparse test matrices on multiple Nehalem processors, GT200 NVidia Tesla, and next-generation Fermi Tesla GPUs. With one processor, we see a performance speedup of 2 to 3x when using all the physical cores, and a total speedup of 2 to 8x when adding a GPU to this multi-core unit, and hence a speedup of 4 to 24x compared to the sequential solver. (authors)
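
    The auto-tuning machinery is specific to the paper, but the kernel being tuned is the matrix-vector product inside each Arnoldi step. The sketch below shows one such step (matvec followed by modified Gram-Schmidt orthogonalization against the previous basis vectors), with the matvec passed in as a callable so that a tuned CPU or GPU implementation could be plugged in; all names are illustrative.

        #include <vector>
        #include <cmath>
        #include <cstddef>
        #include <functional>

        // One Arnoldi step: w = A*v_j, orthogonalize w against V = [v_0..v_j] with
        // modified Gram-Schmidt, store the coefficients in column j of the
        // (m+1) x m Hessenberg matrix H (row-major), and return v_{j+1}.
        std::vector<double> arnoldi_step(
            const std::function<void(const std::vector<double>&, std::vector<double>&)>& matvec,
            const std::vector<std::vector<double>>& V,   // basis vectors v_0..v_j
            std::vector<double>& H, std::size_t m, std::size_t j) {
            const std::size_t n = V[0].size();
            std::vector<double> w(n);
            matvec(V[j], w);                              // the kernel worth auto-tuning
            for (std::size_t i = 0; i <= j; ++i) {
                double h = 0.0;
                for (std::size_t k = 0; k < n; ++k) h += V[i][k] * w[k];
                for (std::size_t k = 0; k < n; ++k) w[k] -= h * V[i][k];
                H[i * m + j] = h;
            }
            double norm = 0.0;
            for (double wk : w) norm += wk * wk;
            norm = std::sqrt(norm);
            H[(j + 1) * m + j] = norm;
            for (double& wk : w) wk /= norm;              // caller should check for breakdown (norm ~ 0)
            return w;
        }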

  8. Sparse Vector Distributions and Recovery from Compressed Sensing

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the extent of these effects has yet to be explored, and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgement of the recovery performance of one algorithm over that of another must be prefaced by the conditions for which this is observed to be true, including sparse vector distributions, and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.

  9. An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2014-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method, breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM algorithm has to handle extra irregularity from three aspects: (1) the number of the nonzero entries in the result sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the result sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input ... Load balancing builds on the number of the necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art GPU SpGEMM methods in the CUSPARSE library and the CUSP library and the latest CPU SpGEMM method in the Intel Math Kernel Library, our ...

  10. Implementation of hierarchical clustering using k-mer sparse matrix to analyze MERS-CoV genetic relationship

    Science.gov (United States)

    Bustamam, A.; Ulul, E. D.; Hura, H. F. A.; Siswantining, T.

    2017-07-01

    Hierarchical clustering is one of the effective methods for creating a phylogenetic tree based on the distance matrix between DNA (deoxyribonucleic acid) sequences. One of the well-known methods to calculate the distance matrix is the k-mer method. Generally, k-mer is more efficient than some other distance matrix calculation techniques. The k-mer method starts by creating a k-mer sparse matrix, followed by creating k-mer singular value vectors; the last step is computing the distances among the vectors. In this paper, we analyze the sequences of MERS-CoV (Middle East Respiratory Syndrome - Coronavirus) DNA by implementing hierarchical clustering using a k-mer sparse matrix in order to perform the phylogenetic analysis. Our results show that the ancestor of our MERS-CoV comes from Egypt. Moreover, we found that a MERS-CoV infection that occurs in one country may not necessarily come from the same country of origin. This suggests that the process of MERS-CoV mutation might not only be influenced by geographical factors.
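
    The construction of the k-mer sparse matrix is not listed in the record. A minimal sketch of the per-sequence step (counting the k-mers of a DNA string into a sparse map, effectively one row of the matrix) could look like this; the names and the map-based representation are assumptions.

        #include <map>
        #include <string>
        #include <cstddef>

        // Count the k-mers of a DNA sequence. The map acts as one sparse row of
        // the k-mer matrix: only k-mers that actually occur are stored, out of
        // the 4^k possible columns.
        std::map<std::string, std::size_t> kmer_counts(const std::string& seq, std::size_t k) {
            std::map<std::string, std::size_t> counts;
            if (seq.size() >= k)
                for (std::size_t i = 0; i + k <= seq.size(); ++i)
                    ++counts[seq.substr(i, k)];
            return counts;
        }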

  11. Sparse and smooth canonical correlation analysis through rank-1 matrix approximation

    Science.gov (United States)

    Aïssa-El-Bey, Abdeldjalil; Seghouane, Abd-Krim

    2017-12-01

    Canonical correlation analysis (CCA) is a well-known technique used to characterize the relationship between two sets of multidimensional variables by finding linear combinations of variables with maximal correlation. Sparse CCA and smooth or regularized CCA are two widely used variants of CCA, because of the improved interpretability of the former and the better performance of the latter. So far, the cross-matrix product of the two sets of multidimensional variables has been widely used for the derivation of these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. These algorithms differ from the existing ones in their derivation, which is based on penalized rank-1 matrix approximation and the orthogonal projectors onto the space spanned by the two sets of multidimensional variables instead of the simple cross-matrix product. The performance and effectiveness of the proposed algorithms are tested on simulated experiments. From these results, it can be observed that they outperform the state-of-the-art sparse CCA algorithms.

  12. Integrated optic vector-matrix multiplier

    Science.gov (United States)

    Watts, Michael R [Albuquerque, NM

    2011-09-27

    A vector-matrix multiplier is disclosed which uses N different wavelengths of light that are modulated with amplitudes representing elements of an N×1 vector and combined to form an input wavelength-division multiplexed (WDM) light stream. The input WDM light stream is split into N streamlets from which each wavelength of the light is individually coupled out and modulated for a second time using an input signal representing elements of an M×N matrix, and is then coupled into an output waveguide for each streamlet to form an output WDM light stream which is detected to generate a product of the vector and matrix. The vector-matrix multiplier can be formed as an integrated optical circuit using either waveguide amplitude modulators or ring resonator amplitude modulators.

  13. A framework for general sparse matrix-matrix multiplication on GPUs and heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data ... memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary ...

  14. Massive Asynchronous Parallelization of Sparse Matrix Factorizations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)

    2018-01-08

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.

  15. The Use of Sparse Direct Solver in Vector Finite Element Modeling for Calculating Two Dimensional (2-D) Magnetotelluric Responses in Transverse Electric (TE) Mode

    Science.gov (United States)

    Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.

    2018-04-01

    In previous research, the linear systems of the vector finite element method for two-dimensional (2-D) magnetotelluric (MT) response modeling in TE mode were solved with a non-sparse direct solver. Nevertheless, there are weaknesses that have to be addressed, especially the accuracy at low frequencies (10⁻³ Hz to 10⁻⁵ Hz), which has not yet been achieved, and the high computational cost on dense meshes. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses of solving the linear systems of the vector finite element method. A sparse direct solver is advantageous for solving the linear systems of the vector finite element method because of the matrix properties, which are symmetric and sparse. The sparse direct solver for the linear systems of the vector finite element method has been validated against analytical solutions for a homogeneous half-space model and a vertical contact model. The validation results show that the sparse direct solver is more stable than the non-sparse direct solver in computing the linear problem of the vector finite element method, especially at low frequencies. In the end, accurate 2-D MT responses at low frequencies (10⁻³ Hz to 10⁻⁵ Hz) have been achieved with efficient array memory allocation and less computational time.

  16. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    Energy Technology Data Exchange (ETDEWEB)

    Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  17. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
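
    NTPoly's own expansions operate on whole sparse matrices through sparse matrix-matrix multiplication. As a simpler illustration of the polynomial-expansion idea only, the sketch below evaluates p(A)v for a single vector v by Horner's rule, using nothing but repeated matrix-vector products; names are illustrative and this is not the library's algorithm.

        #include <vector>
        #include <cstddef>
        #include <functional>

        // Evaluate y = p(A) * v where p(t) = c[0] + c[1] t + ... + c[d] t^d,
        // by Horner's rule: start with y = c[d] v, then repeat y <- A*y + c[i] v.
        // Assumes coeff is non-empty (degree >= 0).
        std::vector<double> poly_apply(
            const std::function<void(const std::vector<double>&, std::vector<double>&)>& matvec,
            const std::vector<double>& coeff,   // c[0..d]
            const std::vector<double>& v) {
            const std::size_t n = v.size();
            std::vector<double> y(n), tmp(n);
            for (std::size_t k = 0; k < n; ++k) y[k] = coeff.back() * v[k];
            for (std::size_t i = coeff.size() - 1; i-- > 0; ) {
                matvec(y, tmp);                                  // tmp = A * y
                for (std::size_t k = 0; k < n; ++k) y[k] = tmp[k] + coeff[i] * v[k];
            }
            return y;
        }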

  18. Porting of the DBCSR library for Sparse Matrix-Matrix Multiplications to Intel Xeon Phi systems

    OpenAIRE

    Bethune, Iain; Gloess, Andeas; Hutter, Juerg; Lazzaro, Alfio; Pabst, Hans; Reid, Fiona

    2017-01-01

    Multiplication of two sparse matrices is a key operation in the simulation of the electronic structure of systems containing thousands of atoms and electrons. The highly optimized sparse linear algebra library DBCSR (Distributed Block Compressed Sparse Row) has been specifically designed to efficiently perform such sparse matrix-matrix multiplications. This library is the basic building block for linear scaling electronic structure theory and low scaling correlated methods in CP2K. It is para...

  19. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. Then, we put forward the Joint-2D-SL0 algorithm, which can solve the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.

  20. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard-structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
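
    The record summarizes SSVD without formulas. Schematically, and with the penalty details omitted (so this is only an indicative form, not the paper's exact criterion), each SSVD layer is a penalized rank-one approximation

        \min_{s,\,u,\,v}\; \| X - s\, u v^{\top} \|_F^2 \;+\; \lambda_u P_1(u) \;+\; \lambda_v P_2(v)

    where X is the data matrix (or the residual after removing previous layers), s u vᵀ is a rank-one layer, and P_1, P_2 are sparsity-inducing penalties on the left and right singular vectors; u and v are updated by alternating penalized regressions.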

  1. Sparse Matrix for ECG Identification with Two-Lead Features

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.

  2. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors

    Science.gov (United States)

    He, Xingyu; Tong, Ningning; Hu, Xiaowei

    2018-01-01

    Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration, and its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.

  3. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t − 1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t − 1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t − 1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The ...

  4. Fully-differential NNLO predictions for vector-boson pair production with MATRIX

    CERN Document Server

    Wiesemann, Marius; Kallweit, Stefan; Rathlev, Dirk

    2016-01-01

    We review the computations of the next-to-next-to-leading order (NNLO) QCD corrections to vector-boson pair production processes in proton–proton collisions and their implementation in the numerical code MATRIX. Our calculations include the leptonic decays of W and Z bosons, consistently taking into account all spin correlations, off-shell effects and non-resonant contributions. For massive vector-boson pairs we show inclusive cross sections, applying the respective mass windows chosen by ATLAS and CMS to define Z bosons from their leptonic decay products, as well as total cross sections for stable bosons. Moreover, we provide samples of differential distributions in fiducial phase-space regions inspired by typical selection cuts used by the LHC experiments. For the vast majority of measurements, the inclusion of NNLO corrections significantly improves the agreement of the Standard Model predictions with data.

  5. Sparse-matrix factorizations for fast symmetric Fourier transforms

    International Nuclear Information System (INIS)

    Sequel, J.

    1987-01-01

    This work proposes new fast algorithms for computing the discrete Fourier transform of certain families of symmetric sequences. Sequences commonly found in problems of structure determination by x-ray crystallography and in numerical solutions of boundary-value problems for partial differential equations are dealt with. In the algorithms presented, the redundancies in the input and output data, due to the presence of symmetries in the input data sequence, were eliminated. Using ring-theoretical methods, a matrix representation is obtained for the remaining calculations, which factors as the product of a complex block-diagonal matrix and an integral matrix. A basic two-step algorithm scheme arises from this factorization, with a first step consisting of pre-additions and a second step containing the calculations involved in computing with the blocks in the block-diagonal factor. These blocks are structured as block-Hankel matrices, and two sparse-matrix factoring formulas are developed in order to diminish their arithmetic complexity

  6. MIMO-OFDM Chirp Waveform Diversity Design and Implementation Based on Sparse Matrix and Correlation Optimization

    Directory of Open Access Journals (Sweden)

    Wang Wen-qin

    2015-02-01

    The waveforms used in Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) should have a large time-bandwidth product and good ambiguity function performance. A scheme to design multiple orthogonal MIMO SAR Orthogonal Frequency Division Multiplexing (OFDM) chirp waveforms by combining sparse matrix and correlation optimization is proposed. First, the problem of MIMO SAR waveform design amounts to the associated design of hopping frequencies and amplitudes. Then an iterative exhaustive search algorithm is adopted to optimally design the code matrix with the constraints of minimizing the block correlation coefficient of the sparse matrix and the sum of cross-correlation peaks. The amplitude matrix is adaptively designed by minimizing the cross-correlation peaks with a genetic algorithm. Additionally, the impacts of waveform number, hopping frequency interval and selectable frequency index are also analyzed. The simulation results verify that the proposed scheme can design multiple orthogonal, large time-bandwidth product OFDM chirp waveforms with low cross-correlation peaks and sidelobes, and that it improves ambiguity performance.

  7. Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

    KAUST Repository

    Boukaram, W.

    2015-03-25

    Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics, very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe the distribution of dust particles in the atmosphere, the concentration of mineral resources in the earth's crust or an uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data sparse matrices called hierarchical matrices (H-matrices) where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work, and we will present work done on the matrix-vector operation on the GPU using the KSPARSE library.
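
    The KSPARSE GPU kernels are not shown in this record, but the core arithmetic they accelerate is easy to state. For a low-rank sub-block approximated as U Vᵀ (U of size m×k, V of size n×k, with k small), its contribution to a matrix-vector product costs O((m + n)k) instead of O(mn), as in the sketch below; layout and names are assumptions.

        #include <vector>
        #include <cstddef>

        // y += (U * V^T) * x for a low-rank block stored as U (m x k) and V (n x k),
        // both row-major. Computing t = V^T x first and then y += U t needs
        // O((m + n) k) work instead of the O(m n) of a dense block.
        void lowrank_block_apply(std::size_t m, std::size_t n, std::size_t k,
                                 const std::vector<double>& U, const std::vector<double>& V,
                                 const double* x, double* y) {
            std::vector<double> t(k, 0.0);
            for (std::size_t j = 0; j < n; ++j)        // t = V^T * x
                for (std::size_t r = 0; r < k; ++r)
                    t[r] += V[j * k + r] * x[j];
            for (std::size_t i = 0; i < m; ++i)        // y += U * t
                for (std::size_t r = 0; r < k; ++r)
                    y[i] += U[i * k + r] * t[r];
        }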

  8. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

    Full Text Available Compressive sensing theory enables faithful reconstruction of signals that are sparse in a domain $\Psi$, at sampling rates lower than the Nyquist criterion, by using a sampling or sensing matrix $\Phi$ that satisfies the restricted isometry property. The roles played by the sensing matrix $\Phi$ and the sparsity matrix $\Psi$ are vital for faithful reconstruction. If the sensing matrix is dense, it requires large storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix with the least incurred computational cost while maintaining the quality of the reconstructed image. The design approach is based on a sparse block circulant matrix (SBCM) with a few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of the reconstructed medical images obtained with the various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ($\Phi\Psi$) is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.

  9. Large-region acoustic source mapping using a movable array and sparse covariance fitting.

    Science.gov (United States)

    Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2017-01-01

    Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
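
    The vectorization step can be reproduced with a small numpy/scipy sketch (an assumed narrowband uniform-linear-array model with a known steering matrix, not the authors' movable-array pipeline): for R = A diag(p) A^H + sigma^2 I, the vectorized covariance vec(R) is linear in the source powers p through the Khatri-Rao product of A* and A, which is the linear model the sparse-constrained fitting then inverts.

        import numpy as np
        from scipy.linalg import khatri_rao

        M, K = 8, 3                                     # sensors, sources (toy values)
        theta = np.deg2rad([-20.0, 5.0, 40.0])
        A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta)))   # ULA steering matrix
        p = np.array([1.0, 0.5, 2.0])                   # unknown source powers
        sigma2 = 0.1                                    # noise power

        # Array covariance and its vectorized linear model in the powers p.
        R = A @ np.diag(p) @ A.conj().T + sigma2 * np.eye(M)
        Phi = khatri_rao(A.conj(), A)                   # columns kron(conj(a_k), a_k)
        r = Phi @ p + sigma2 * np.eye(M).flatten(order="F")
        print(np.allclose(r, R.flatten(order="F")))     # True: vec(R) is linear in p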

  10. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...

  11. Inverse Operation of Four-dimensional Vector Matrix

    OpenAIRE

    H J Bao; A J Sang; H X Chen

    2011-01-01

    This is a new series of studies that defines and proves multidimensional vector matrix mathematics, including the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse, and related properties. These innovative concepts of multidimensional vector matrix mathematics, created by the authors, have numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.

  12. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use this feature information to construct a sparse similarity matrix, which improves the operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.

  13. Pulse-Width-Modulation of Neutral-Point-Clamped Sparse Matrix Converter

    DEFF Research Database (Denmark)

    Loh, P.C.; Blaabjerg, Frede; Gao, F.

    2007-01-01

    input current and output voltage can be achieved with minimized rectification switching loss, rendering the sparse matrix converter as a competitive choice for interfacing the utility grid to (e.g.) defense facilities that require a different frequency supply. As an improvement, sparse matrix converter...... with improved waveform quality. Performances and practicalities of the designed schemes are verified in simulation and experimentally using an implemented laboratory prototype with some representative results captured and presented in the paper....

  14. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit; Pfluger, Dirk; Murarasu, Alin; Jacob, Riko

    2012-01-01

    performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations

  15. Improving the Communication Pattern in Matrix-Vector Operations for Large Scale-Free Graphs by Disaggregation

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlemann, Verena [Emory Univ., Atlanta, GA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-10-28

    Matrix-vector multiplication is the key operation in any Krylov-subspace iteration method. We are interested in Krylov methods applied to problems associated with the graph Laplacian arising from large scale-free graphs. Furthermore, computations with graphs of this type on parallel distributed-memory computers are challenging. This is due to the fact that scale-free graphs have a degree distribution that follows a power law, and currently available graph partitioners are not efficient for such an irregular degree distribution. The lack of a good partitioning leads to excessive interprocessor communication requirements during every matrix-vector product. Here, we present an approach to alleviate this problem based on embedding the original irregular graph into a more regular one by disaggregating (splitting up) vertices in the original graph. The matrix-vector operations for the original graph are performed via a factored triple matrix-vector product involving the embedding graph. And even though the latter graph is larger, we are able to decrease the communication requirements considerably and improve the performance of the matrix-vector product.
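
    A schematic scipy sketch of the factored triple matrix-vector product (toy sizes and a random disaggregation map, not the authors' vertex-splitting strategy for scale-free graphs): the original operator is A = P^T B P with B defined on the larger but more regular embedding graph, and A is never formed explicitly.

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)

        # B acts on the larger, disaggregated (embedding) graph; the 0/1 map P assigns
        # each disaggregated vertex to the original vertex it was split from.
        n_big, n_orig = 12, 8
        B = sp.random(n_big, n_big, density=0.3, random_state=0, format="csr")
        owner = rng.integers(0, n_orig, size=n_big)
        P = sp.csr_matrix((np.ones(n_big), (np.arange(n_big), owner)), shape=(n_big, n_orig))

        # The original-graph operator is A = P^T B P, but it is never formed explicitly:
        x = rng.random(n_orig)
        y_factored = P.T @ (B @ (P @ x))         # factored triple matrix-vector product
        y_explicit = (P.T @ B @ P) @ x           # explicit operator, for verification only
        print(np.allclose(y_factored, y_explicit))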

  16. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P; Maris, Pieter [Department of Physics, Iowa State University, Ames, IA, 50011 (United States); Ng, Esmond; Yang, Chao [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sosonkina, Masha, E-mail: jvary@iastate.ed [Scalable Computing Laboratory, Ames Laboratory, Iowa State University, Ames, IA, 50011 (United States)

    2009-07-01

    The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.

  17. Ab initio nuclear structure - the large sparse matrix eigenvalue problem

    International Nuclear Information System (INIS)

    Vary, James P; Maris, Pieter; Ng, Esmond; Yang, Chao; Sosonkina, Masha

    2009-01-01

    The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.

  18. Parallel transposition of sparse data structures

    DEFF Research Database (Denmark)

    Wang, Hao; Liu, Weifeng; Hou, Kaixi

    2016-01-01

    Many applications in computational sciences and social sciences exploit sparsity and connectivity of acquired data. Even though many parallel sparse primitives such as sparse matrix-vector (SpMV) multiplication have been extensively studied, some other important building blocks, e.g., parallel tr...... transposition in the latest vendor-supplied library on an Intel multicore CPU platform, and the MergeTrans approach achieves an average of 3.4-fold (up to 11.7-fold) speedup on an Intel Xeon Phi many-core processor....

  19. SparseM: A Sparse Matrix Package for R *

    Directory of Open Access Journals (Sweden)

    Roger Koenker

    2003-02-01

    Full Text Available SparseM provides some basic R functionality for linear algebra with sparse matrices. Use of the package is illustrated by a family of linear model fitting functions that implement least squares methods for problems with sparse design matrices. Significant performance improvements in memory utilization and computational speed are possible for applications involving large sparse matrices.

  20. Matrix vector analysis

    CERN Document Server

    Eisenman, Richard L

    2005-01-01

    This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics.Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structur

  1. Improving residue-residue contact prediction via low-rank and sparse decomposition of residue correlation matrix.

    Science.gov (United States)

    Zhang, Haicang; Gao, Yujuan; Deng, Minghua; Wang, Chao; Zhu, Jianwei; Li, Shuai Cheng; Zheng, Wei-Mou; Bu, Dongbo

    2016-03-25

    Strategies for correlation analysis in protein contact prediction often encounter two challenges, namely, the indirect coupling among residues, and the background correlations mainly caused by phylogenetic biases. While various studies have been conducted on how to disentangle indirect coupling, the removal of background correlations still remains unresolved. Here, we present an approach for removing background correlations via low-rank and sparse decomposition (LRS) of a residue correlation matrix. The correlation matrix can be constructed using either local inference strategies (e.g., mutual information, or MI) or global inference strategies (e.g., direct coupling analysis, or DCA). In our approach, a correlation matrix was decomposed into two components, i.e., a low-rank component representing background correlations, and a sparse component representing true correlations. Finally the residue contacts were inferred from the sparse component of correlation matrix. We trained our LRS-based method on the PSICOV dataset, and tested it on both GREMLIN and CASP11 datasets. Our experimental results suggested that LRS significantly improves the contact prediction precision. For example, when equipped with the LRS technique, the prediction precision of MI and mfDCA increased from 0.25 to 0.67 and from 0.58 to 0.70, respectively (Top L/10 predicted contacts, sequence separation: 5 AA, dataset: GREMLIN). In addition, our LRS technique also consistently outperforms the popular denoising technique APC (average product correction), on both local (MI_LRS: 0.67 vs MI_APC: 0.34) and global measures (mfDCA_LRS: 0.70 vs mfDCA_APC: 0.67). Interestingly, we found out that when equipped with our LRS technique, local inference strategies performed in a comparable manner to that of global inference strategies, implying that the application of LRS technique narrowed down the performance gap between local and global inference strategies. Overall, our LRS technique greatly facilitates
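
    A generic numpy sketch of a low-rank-plus-sparse split in this spirit (a standard principal-component-pursuit style iteration on a synthetic matrix; it is a stand-in, not the authors' LRS solver or their MI/DCA pipeline): the low-rank component absorbs the smooth background while a planted strong coupling surfaces in the sparse component.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def low_rank_plus_sparse(M, lam=None, tol=1e-7, max_iter=500):
            """Split M ~= L + S with L low rank (background) and S sparse (signal),
            via an inexact-ALM / ADMM style iteration for principal component pursuit."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
            mu = 1.25 / np.linalg.norm(M, 2)
            mu_max = mu * 1e7
            L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
            for _ in range(max_iter):
                # Singular-value thresholding gives the low-rank update.
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * soft(sig, 1.0 / mu)) @ Vt
                # Entrywise soft thresholding gives the sparse update.
                S = soft(M - L + Y / mu, lam / mu)
                Y = Y + mu * (M - L - S)
                mu = min(1.5 * mu, mu_max)
                if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
                    break
            return L, S

        # Synthetic "correlation" matrix: rank-3 background plus one planted strong coupling.
        rng = np.random.default_rng(1)
        Bk = rng.standard_normal((60, 3))
        C = Bk @ Bk.T / 3.0
        C[5, 17] = C[17, 5] = 6.0
        L, S = low_rank_plus_sparse(C)
        print(np.unravel_index(np.argmax(np.abs(S)), S.shape))   # expected: the planted pair (5, 17) / (17, 5)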

  2. Improved success of sparse matrix protein crystallization screening with heterogeneous nucleating agents.

    Directory of Open Access Journals (Sweden)

    Anil S Thakur

    2007-10-01

    Full Text Available Crystallization is a major bottleneck in the process of macromolecular structure determination by X-ray crystallography. Successful crystallization requires the formation of nuclei and their subsequent growth to crystals of suitable size. Crystal growth generally occurs spontaneously in a supersaturated solution as a result of homogeneous nucleation. However, in a typical sparse matrix screening experiment, precipitant and protein concentration are not sampled extensively, and supersaturation conditions suitable for nucleation are often missed. We tested the effect of nine potential heterogeneous nucleating agents on the crystallization of ten test proteins in a sparse matrix screen. Several nucleating agents induced crystal formation under conditions where no crystallization occurred in the absence of the nucleating agent. Four nucleating agents (dried seaweed, horse hair, cellulose and hydroxyapatite) had a considerable overall positive effect on crystallization success. This effect was further enhanced when these nucleating agents were used in combination with each other. Our results suggest that the addition of heterogeneous nucleating agents increases the chances of crystal formation when using sparse matrix screens.

  3. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.

  4. Ordering sparse matrices for cache-based systems

    International Nuclear Information System (INIS)

    Biswas, Rupak; Oliker, Leonid

    2001-01-01

    The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. However, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning.

  5. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    Science.gov (United States)

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is a core problem in wireless sensor networks. It can be solved with the help of powerful beacons, which are equipped with global positioning system devices and therefore know their own location. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Next, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the recovered vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and a lower requirement on the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has good localization performance under low SNR; thus, the proposed scheme is robust.

  7. Matrix elements of a hyperbolic vector operator under SO(2,1)

    International Nuclear Information System (INIS)

    Zettili, N.; Boukahil, A.

    2003-01-01

    We deal here with the use of Wigner–Eckart type arguments to calculate the matrix elements of a hyperbolic vector operator V by expressing them in terms of reduced matrix elements. In particular, we focus on calculating the matrix elements of this vector operator within the basis of the hyperbolic angular momentum T, whose components T_1, T_2, T_3 satisfy an SO(2,1) Lie algebra. We show that the commutation rules between the components of V and T can be inferred from the algebra of ordinary angular momentum. We then show that, by analogy to the Wigner–Eckart theorem, we can calculate the matrix elements of V within a representation where T^2 and T_3 are jointly diagonal. (author)

  8. Finding a Hadamard matrix by simulated annealing of spin vectors

    Science.gov (United States)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulating a combinatorial problem as the optimization of a statistical-mechanics system enables finding better solutions using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns are vectors with an equal number of -1 and +1 entries, called SH-vectors. We define SH spin vectors as representations of the SH vectors, which play a role similar to the spins in an Ising model. The topology of the lattice is generalized into a graph whose edges represent orthogonality relationships among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposing orthogonality, we perform the SA. Transitions of Q are conducted by random exchanges of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. As the energy decreases toward zero, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point an H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
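
    A compact numpy sketch of this kind of search (small order, a simplified energy and a fixed geometric cooling schedule; the graph formulation of the paper is not reproduced): balanced +/-1 columns are annealed toward mutual orthogonality by Metropolis-accepted spin-pair exchanges.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 8                                   # matrix order (must be 1, 2 or a multiple of 4)

        def energy(Q):
            G = Q.T @ Q
            return np.sum((G - N * np.eye(N)) ** 2)   # zero iff all columns are orthogonal

        # Quasi H-matrix: first column all ones, remaining columns balanced +/-1.
        Q = np.ones((N, N), dtype=int)
        for j in range(1, N):
            Q[rng.choice(N, N // 2, replace=False), j] = -1

        T, cooling = 10.0, 0.9995
        E = energy(Q)
        for _ in range(200_000):
            j = rng.integers(1, N)                       # pick a balanced column
            plus = rng.choice(np.flatnonzero(Q[:, j] == 1))
            minus = rng.choice(np.flatnonzero(Q[:, j] == -1))
            Q[plus, j], Q[minus, j] = -1, 1              # spin-pair exchange keeps the balance
            E_new = energy(Q)
            if E_new <= E or rng.random() < np.exp((E - E_new) / T):
                E = E_new                                # accept (Metropolis rule)
            else:
                Q[plus, j], Q[minus, j] = 1, -1          # reject: undo the swap
            T *= cooling
            if E == 0:
                break
        # Energy 0 means Q is a (seminormalized) Hadamard matrix; larger orders
        # need longer schedules or restarts.
        print("final energy:", E)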

  9. Doubly Nonparametric Sparse Nonnegative Matrix Factorization Based on Dependent Indian Buffet Processes.

    Science.gov (United States)

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Xu, Richard Yi Da; Luo, Xiangfeng

    2018-05-01

    Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, traditional SNMF typically assumes the number of latent factors (i.e., the dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of the two stick weights associated with each column pair of the factor matrices while still maintaining their respective marginal distributions specified by the IBP. As a consequence, the generation of the two factor matrices will be columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using a bivariate Beta distribution and 2) using a Copula function. Compared with single IBP-based NMF, the proposed framework jointly makes the two factor matrices nonparametric and sparse, and so can be applied to broader scenarios, such as co-clustering. It is also much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variations in their nonzero entries. Our experiments on synthetic data show the merits of the proposed framework compared with state-of-the-art models in respect of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate its efficiency in document-word co-clustering tasks.

  10. A FPC-ROOT Algorithm for 2D-DOA Estimation in Sparse Array

    Directory of Open Access Journals (Sweden)

    Wenhao Zeng

    2016-01-01

    Full Text Available To improve the performance of two-dimensional direction-of-arrival (2D DOA) estimation in sparse arrays, this paper presents a Fixed Point Continuation Polynomial Roots (FPC-ROOT) algorithm. Firstly, a signal model for DOA estimation is established based on matrix completion, and it can be proved that the proposed model satisfies the Null Space Property (NSP). Secondly, the left and right singular vectors of the received signal matrix are obtained using the matrix completion algorithm. Finally, the 2D DOA estimates are acquired by solving the polynomial roots. The proposed algorithm achieves high accuracy of 2D DOA estimation in sparse arrays without computing the autocorrelation matrix of the received signals or scanning a two-dimensional spectral peak. Besides, it decreases the number of antennas, lowers the computational complexity, and avoids the angle ambiguity problem. Computer simulations demonstrate that the proposed FPC-ROOT algorithm can obtain precise 2D DOA estimates in sparse arrays.

  11. A Non-static Data Layout Enhancing Parallelism and Vectorization in Sparse Grid Algorithms

    KAUST Repository

    Buse, Gerrit

    2012-06-01

    The name sparse grids denotes a highly space-efficient, grid-based numerical technique to approximate high-dimensional functions. Although employed in a broad spectrum of applications from different fields, there have been only a few attempts to use it in real-time visualization (e.g. [1]), due to complex data structures and long algorithm runtime. In this work we present a novel approach inspired by principles of I/O-efficient algorithms. Locally applied coefficient permutations lead to improved cache performance and facilitate the use of vector registers for our sparse grid benchmark problem hierarchization. Based on the compact data structure proposed for regular sparse grids in [2], we developed a new algorithm that outperforms existing implementations on modern multi-core systems by a factor of 37 for a grid size of 127 million points. For larger problems the speedup increases further, and with execution times below 1 s, sparse grids are well suited for visualization applications. Furthermore, we point out how a broad class of sparse grid algorithms can benefit from our approach. © 2012 IEEE.

  12. Sparse Covariance Matrix Estimation by DCA-Based Algorithms.

    Science.gov (United States)

    Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham

    2017-11-01

    This letter proposes a novel approach using the [Formula: see text]-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the [Formula: see text] term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the [Formula: see text]-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical study is performed on simulated and real data sets to assess the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.

  13. MODELING OF DYNAMIC SYSTEMS WITH MODULATION BY MEANS OF KRONECKER VECTOR-MATRIX REPRESENTATION

    Directory of Open Access Journals (Sweden)

    A. S. Vasilyev

    2015-09-01

    Full Text Available The paper deals with the modeling of dynamic systems with modulation using the state-space method. This method, being the basis of modern control theory, builds on the vector-matrix formalism of linear algebra and helps to solve various problems of technical control of continuous and discrete nature, invariant with respect to the dimension of their “input-output” objects. Unfortunately, it has largely bypassed the wide class of control systems whose hardware environment modulates signals. This deficiency is partially addressed by the present paper, which proposes Kronecker vector-matrix representations for the system representation of processes with signal modulation. The main result is a vector-matrix representation of processes with modulation that has no formal difference from that of continuous systems. It is found that these representations can be used effectively in the study of systems with modulation. The obtained model representations of processes with modulation are well adapted to the state-space method. The approaches for computing the eigenvalues of the Kronecker matrix structures that form the matrix basis of model representations of processes described by Kronecker vector products make it possible to apply modal methods in the study of the dynamics of systems with modulation. It is shown that applying controllability analysis for the eigenvalues of general matrices to Kronecker structures makes it possible to divide the eigenvalue spectrum into directed and undirected components. The findings, including design procedures for models of dynamic processes with modulation based on the features of Kronecker vector and matrix structures, invariant with respect to the dimension of input-output relations, are applicable to the development of alternating-current servo drives.
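
    The modal argument above rests on a standard spectral fact that is easy to check numerically (a generic numpy check, not tied to any particular modulated drive model): every eigenvalue of a Kronecker product kron(A, B) is the product of an eigenvalue of A and an eigenvalue of B, so the spectrum of the composite representation can be analysed factor by factor.

        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.standard_normal((3, 3))
        B = rng.standard_normal((4, 4))

        eig_kron = np.linalg.eigvals(np.kron(A, B))
        eig_pairs = np.array([la * lb for la in np.linalg.eigvals(A)
                                       for lb in np.linalg.eigvals(B)])

        # Every eigenvalue of kron(A, B) coincides with one of the pairwise products.
        dist = np.abs(eig_kron[:, None] - eig_pairs[None, :]).min(axis=1)
        print(dist.max() < 1e-8)                 # True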

  14. A Spectral Algorithm for Envelope Reduction of Sparse Matrices

    Science.gov (United States)

    Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.

    1993-01-01

    The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
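
    The core step can be sketched with numpy/scipy (a dense eigensolver is used for this small demo, where a sparse eigensolver would be used at scale; the envelope bookkeeping and the GPS/RCM comparisons of the paper are not reproduced): sorting the entries of the Fiedler vector of the associated Laplacian gives the reordering, which here shrinks the bandwidth of a randomly labelled path graph dramatically.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import laplacian

        def fiedler_ordering(A):
            """Reorder a sparse symmetric matrix by sorting the entries of the Fiedler
            vector (eigenvector of the second-smallest Laplacian eigenvalue)."""
            A = A.tocsr()
            pattern = sp.csr_matrix((np.ones(A.nnz), A.indices, A.indptr), shape=A.shape)
            L = laplacian(pattern)
            vals, vecs = np.linalg.eigh(L.toarray())   # dense solve for this small demo
            return np.argsort(vecs[:, 1])

        def bandwidth(A):
            coo = A.tocoo()
            return int(np.abs(coo.row - coo.col).max())

        # A path graph with a random vertex labelling has a large bandwidth ...
        n = 60
        path = sp.diags([np.ones(n - 1), np.ones(n - 1)], [1, -1], format="csr")
        rng = np.random.default_rng(0)
        perm = rng.permutation(n)
        A = path[perm][:, perm].tocsr()

        # ... and the Fiedler ordering recovers an (almost) banded form.
        p = fiedler_ordering(A)
        print(bandwidth(A), bandwidth(A[p][:, p].tocsr()))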

  15. An efficient optical architecture for sparsely connected neural networks

    Science.gov (United States)

    Hine, Butler P., III; Downie, John D.; Reid, Max B.

    1990-01-01

    An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby making use of the space-bandwidth products of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on updatable media such as thermoplastic materials in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.

  16. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

    A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. Utilizing optimized kernels whenever possible, the resulting performance of such a framework can be shown to be competitive with optimized Fortran programs.
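
    The same design principle can be illustrated in a few lines of Python rather than C++ (a sketch of the idea, not the library's actual interface): the solver below is written against nothing more than a matvec callable, so a CSR matrix, a blocked format, or a matrix-free operator can all be plugged in unchanged.

        import numpy as np
        import scipy.sparse as sp

        def cg(matvec, b, tol=1e-10, max_iter=1000):
            """Conjugate gradients written against a matvec callable only,
            so the matrix format (CSR, blocked, matrix-free, ...) is irrelevant."""
            x = np.zeros_like(b)
            r = b - matvec(x)
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = matvec(p)
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Any "matrix" that can multiply a vector works: here a CSR matrix ...
        n = 100
        A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1], format="csr")
        b = np.ones(n)
        x1 = cg(lambda v: A @ v, b)

        # ... or a matrix-free operator applying the same tridiagonal stencil.
        def stencil(v):
            out = 2.0 * v
            out[:-1] -= v[1:]
            out[1:] -= v[:-1]
            return out
        x2 = cg(stencil, b)
        print(np.allclose(x1, x2))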

  17. Very-short-term wind power probabilistic forecasts by sparse vector autoregression

    DEFF Research Database (Denmark)

    Dowell, Jethro; Pinson, Pierre

    2016-01-01

    A spatio-temporal method for producing very-shortterm parametric probabilistic wind power forecasts at a large number of locations is presented. Smart grids containing tens, or hundreds, of wind generators require skilled very-short-term forecasts to operate effectively, and spatial information...... is highly desirable. In addition, probabilistic forecasts are widely regarded as necessary for optimal power system management as they quantify the uncertainty associated with point forecasts. Here we work within a parametric framework based on the logit-normal distribution and forecast its parameters....... The location parameter for multiple wind farms is modelled as a vector-valued spatiotemporal process, and the scale parameter is tracked by modified exponential smoothing. A state-of-the-art technique for fitting sparse vector autoregressive models is employed to model the location parameter and demonstrates...

  18. New sparse matrix solver in the KIKO3D 3-dimensional reactor dynamics code

    International Nuclear Information System (INIS)

    Panka, I.; Kereszturi, A.; Hegedus, C.

    2005-01-01

    The goal of this paper is to present a more effective method, Bi-CGSTAB, for accelerating the solution of the large sparse matrix equations in the KIKO3D code. This equation system is obtained by using the factorization of the improved quasi-static (IQS) method for the time-dependent nodal kinetic equations. In the old methodology, standard large sparse matrix techniques were used, with Gauss-Seidel preconditioning and a GMRES-type solver. The validation of KIKO3D using Bi-CGSTAB has been performed by solving a VVER-1000 kinetic benchmark problem. Additionally, the convergence characteristics were investigated for given macro time steps of Control Rod Ejection transients. The results obtained by the old GMRES and the new Bi-CGSTAB methods are compared. (author)

  19. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.

  20. The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

    Directory of Open Access Journals (Sweden)

    Wu Qidi

    2014-01-01

    Full Text Available Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the restoration performance, we propose two schemes, for dictionary learning and sparse coding, respectively. In dictionary learning, we measure the similarity between patches of the degraded image by constructing a Shearlet feature vector. We then classify the patches into different classes by similarity and train a cluster dictionary for each class; cascading these cluster dictionaries yields the universal dictionary. In sparse coding, we propose a novel objective function with a coding-residual term, which suppresses the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter for the optimization under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in the nonlocal patches and of the coding-residual suppression, the proposed method shows an advantage in both visual perception and PSNR compared to conventional methods.

  1. MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS

    Directory of Open Access Journals (Sweden)

    A. A. Zolotin

    2015-07-01

    Full Text Available Posteriori inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the description of local posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the use of the tensor product of matrices, the Kronecker power and the Hadamard product. Matrix equations for calculating the posteriori probability vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Equations of the same type have already been discussed within the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in knowledge patterns on the ideals of conjuncts. During the synthesis and development of the matrix-vector equations for the probability vectors of quanta propositions, a number of earlier results concerning normalizing factors in posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence - deterministic, stochastic and inaccurate - combined with scalar and interval estimates of the probability truth of propositional formulas in the knowledge patterns. Linear programming problems are formed; their solution gives the desired interval values of the posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. This sort of description of posteriori inference makes it possible to extend the set of knowledge pattern types that can be used in local and global posteriori inference, as well as to simplify a complex software implementation by using existing third-party libraries that effectively support the representation and processing of matrices and vectors when

  2. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction

    International Nuclear Information System (INIS)

    Yang, C L; Wei, H Y; Soleimani, M; Adler, A

    2013-01-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current–voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. The sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of the sparse matrix reduction on the reconstruction results. (paper)

  3. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    Science.gov (United States)

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. The sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of the sparse matrix reduction on the reconstruction results.
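
    A minimal scipy sketch of the first step (a synthetic Jacobian and an assumed relative threshold; the EIT forward model and the block-wise CG solver are not reproduced): entries below the threshold are set to zero and the matrix is stored in CSR format, which is where the memory saving comes from.

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)

        # Synthetic Jacobian whose entries span several orders of magnitude.
        J = rng.standard_normal((500, 2000)) * 10.0 ** (-4.0 * rng.random((500, 2000)))

        tol = 0.05 * np.abs(J).max()                       # assumed relative threshold
        J_sparse = sp.csr_matrix(np.where(np.abs(J) >= tol, J, 0.0))

        dense_bytes = J.nbytes
        sparse_bytes = J_sparse.data.nbytes + J_sparse.indices.nbytes + J_sparse.indptr.nbytes
        print(J_sparse.nnz / J.size)                       # fraction of entries kept
        print(sparse_bytes / dense_bytes)                  # memory relative to the dense Jacobian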

  4. Turbo-SMT: Parallel Coupled Sparse Matrix-Tensor Factorizations and Applications

    Science.gov (United States)

    Papalexakis, Evangelos E.; Faloutsos, Christos; Mitchell, Tom M.; Talukdar, Partha Pratim; Sidiropoulos, Nicholas D.; Murphy, Brian

    2016-01-01

    How can we correlate the neural activity in the human brain as it responds to typed words with properties of these terms (like 'edible', 'fits in hand')? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem. Can we enhance any CMTF solver so that it can operate on potentially very large datasets that may not fit in main memory? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm, parallelizes it (with speedups of up to 65-fold), and produces sparse and interpretable solutions. Additionally, we improve upon ALS, the work-horse algorithm for CMTF, with respect to efficiency and robustness to missing values. We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Turbo-SMT by applying it to a Facebook dataset (users, 'friends', wall-postings); there, Turbo-SMT spots spammer-like anomalies. PMID:27672406

  5. Algorithms for sparse, symmetric, definite quadratic lambda-matrix eigenproblems

    International Nuclear Information System (INIS)

    Scott, D.S.; Ward, R.C.

    1981-01-01

    Methods are presented for computing eigenpairs of the quadratic lambda-matrix, Mλ² + Cλ + K, where M, C, and K are large and sparse, and have special symmetry-type properties. These properties are sufficient to insure that all the eigenvalues are real and that theory analogous to the standard symmetric eigenproblem exists. The methods employ some standard techniques such as partial tri-diagonalization via the Lanczos Method and subsequent eigenpair calculation, shift-and-invert strategy and subspace iteration. The methods also employ some new techniques such as Rayleigh-Ritz quadratic roots and the inertia of symmetric, definite, quadratic lambda-matrices.

  6. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    Science.gov (United States)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.

  7. Auto-tuning Dense Vector and Matrix-vector Operations for Fermi GPUs

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    applications. As examples, we develop single-precision CUDA kernels for the Euclidian norm (SNRM2) and the matrix-vector multiplication (SGEMV). The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture). We show that auto-tuning can be successfully applied to achieve high performance...

  8. Robust extraction of basis functions for simultaneous and proportional myoelectric control via sparse non-negative matrix factorization

    Science.gov (United States)

    Lin, Chuang; Wang, Binghui; Jiang, Ning; Farina, Dario

    2018-04-01

    Objective. This paper proposes a novel simultaneous and proportional multiple degree of freedom (DOF) myoelectric control method for active prostheses. Approach. The approach is based on non-negative matrix factorization (NMF) of surface EMG signals with the inclusion of sparseness constraints. By applying a sparseness constraint to the control signal matrix, it is possible to extract the basis information from arbitrary movements (quasi-unsupervised approach) for multiple DOFs concurrently. Main Results. In online testing based on target hitting, able-bodied subjects reached a greater throughput (TP) when using sparse NMF (SNMF) than with classic NMF or with linear regression (LR). Accordingly, the completion time (CT) was shorter for SNMF than NMF or LR. The same observations were made in two patients with unilateral limb deficiencies. Significance. The addition of sparseness constraints to NMF allows for a quasi-unsupervised approach to myoelectric control with superior results with respect to previous methods for the simultaneous and proportional control of multi-DOF. The proposed factorization algorithm allows robust simultaneous and proportional control, is superior to previous supervised algorithms, and, because of minimal supervision, paves the way to online adaptation in myoelectric control.
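
    A numpy sketch of the general idea (NMF under a Euclidean cost with an l1 sparseness penalty on the activation matrix, solved by a common multiplicative-update heuristic; this is a stand-in, not the authors' factorization algorithm or EMG processing chain): the penalty weight lam trades reconstruction accuracy against the sparseness of the extracted control signals.

        import numpy as np

        def sparse_nmf(V, rank, lam=0.05, n_iter=500, eps=1e-9, seed=0):
            """Minimise ||V - W H||_F^2 + lam * sum(H), with W, H >= 0, by multiplicative
            updates; lam controls how sparse the activation matrix H becomes."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, rank)) + eps
            H = rng.random((rank, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
                scale = W.sum(axis=0, keepdims=True) + eps   # keep the penalty acting on H only
                W /= scale
                H *= scale.T                                 # rescale H so W @ H is unchanged
            return W, H

        # Toy "EMG envelope" data: two nonnegative sources mixed into six channels.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 6.0 * np.pi, 400)
        S = np.abs(np.sin(np.vstack([t, 1.7 * t])))      # 2 x 400 nonnegative sources
        A = rng.random((6, 2))                           # mixing matrix ("muscle synergies")
        V = A @ S

        W, H = sparse_nmf(V, rank=2)
        print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative reconstruction error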

  9. Practical improvements and merging of POWHEG simulations for vector boson production

    International Nuclear Information System (INIS)

    Aliolo, Simone; Hamilton, Keith; Re, Emanuele

    2011-10-01

    In this article we generalise POWHEG next-to-leading order parton shower (NLOPS) simulations of vector boson production and vector boson production in association with a single jet, to give matrix element corrected MENLOPS simulations. In so doing we extend and provide, for the first time, an exact and faithful implementation of the MENLOPS formalism in hadronic collisions. We also consider merging the resulting event samples according to a phase space partition defined in terms of an effective jet clustering scale. The merging scale is restricted such that the component generated by the associated production simulation does not impact on the NLO accuracy of inclusive vector boson production observables. The dependence of the predictions on the unphysical merging scale is demonstrated. Comparisons with Tevatron and LHC data are presented. (orig.)

  10. Practical improvements and merging of POWHEG simulations for vector boson production

    Energy Technology Data Exchange (ETDEWEB)

    Aliolo, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Hamilton, Keith [Istituto Nazionale di Fisica Nucleare, Milan (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Dept. of Physics

    2011-10-15

    In this article we generalise POWHEG next-to-leading order parton shower (NLOPS) simulations of vector boson production and vector boson production in association with a single jet, to give matrix element corrected MENLOPS simulations. In so doing we extend and provide, for the first time, an exact and faithful implementation of the MENLOPS formalism in hadronic collisions. We also consider merging the resulting event samples according to a phase space partition defined in terms of an effective jet clustering scale. The merging scale is restricted such that the component generated by the associated production simulation does not impact on the NLO accuracy of inclusive vector boson production observables. The dependence of the predictions on the unphysical merging scale is demonstrated. Comparisons with Tevatron and LHC data are presented. (orig.)

  11. Vector-vector production in photon-photon interactions

    International Nuclear Information System (INIS)

    Ronan, M.T.

    1988-01-01

    Measurements of exclusive untagged ρ⁰ρ⁰, ρφ, K*K̄*, and ρω production and tagged ρ⁰ρ⁰ production in photon-photon interactions by the TPC/Two-Gamma experiment are reviewed. Comparisons to the results of other experiments and to models of vector-vector production are made. Fits to the data following a four quark model prescription for vector meson pair production are also presented. 10 refs., 9 figs.

  12. Migration of vectorized iterative solvers to distributed memory architectures

    Energy Technology Data Exchange (ETDEWEB)

    Pommerell, C. [AT& T Bell Labs., Murray Hill, NJ (United States); Ruehl, R. [CSCS-ETH, Manno (Switzerland)

    1994-12-31

    Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved - smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this "natural" parallelism is not easy to exploit in irregularly structured sparse matrices and with good preconditioners. As a result, high-performance implementations of iterative solvers have attracted a lot of interest in recent years. Most efforts are geared to vectorize or parallelize the dominating operation - structured or unstructured sparse matrix-vector multiplication - or to increase locality and parallelism by reformulating the algorithm - reducing global synchronization in inner products or local data exchange in preconditioners. Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, parallel computers with physically distributed memory and a better price/performance ratio have been offered by vendors as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment. In particular, they are considering migration from traditional vector supercomputers to DMPPs. Application requirements force one to use flexible and portable libraries. They want to extend the portability of iterative solvers rather than reimplementing everything for each new machine, or even for each new architecture.

  13. Discrete-ordinate method with matrix exponential for a pseudo-spherical atmosphere: Vector case

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    The paper is devoted to the extension of the matrix-exponential formalism for scalar radiative transfer to the vector case. Using basic results of the theory of matrix-exponential functions we provide a compact and versatile formulation of vector radiative transfer. As in the scalar case, we operate with the concept of the layer equation incorporating the level values of the Stokes vector. The matrix exponentials which enter in the expression of the layer equation are computed by using the matrix eigenvalue method and the Padé approximation. A discussion of the computational efficiency of the proposed method for both an aerosol-loaded atmosphere and a cloudy atmosphere is also provided.

  14. Technique detection software for Sparse Matrices

    Directory of Open Access Journals (Sweden)

    KHAN Muhammad Taimoor

    2009-12-01

    Full Text Available Sparse storage formats are techniques for storing and processing sparse matrix data efficiently. The performance of these storage formats depends upon the distribution of non-zeros within the matrix in different dimensions. To obtain good results we need a technique that best suits the organization of data in a particular matrix, so selecting the right format is the main step towards improving performance; otherwise efficiency can decrease. The purpose of this research is to help identify the best storage format, in terms of reduced storage size and high processing efficiency, for a given sparse matrix.
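
    The record above gives no code; as a minimal, self-contained illustration of why the storage format matters for the sparse matrix-vector product, the Python sketch below (assuming NumPy and SciPy; the matrix, sizes and density are arbitrary) builds one random matrix in COO, CSR and CSC form and times repeated SpMVs with each.

      # Sketch: the same sparse matrix held in different storage formats,
      # timed on repeated matrix-vector products.  Sizes are illustrative only.
      import time
      import numpy as np
      import scipy.sparse as sp

      n, density = 2000, 0.005
      A_coo = sp.random(n, n, density=density, format="coo", random_state=0)
      x = np.random.default_rng(0).standard_normal(n)

      for fmt in ("coo", "csr", "csc"):
          A = A_coo.asformat(fmt)              # same non-zeros, different layout
          t0 = time.perf_counter()
          for _ in range(200):
              y = A @ x                        # sparse matrix-vector product
          print(f"{fmt}: {time.perf_counter() - t0:.4f} s for 200 SpMVs, nnz = {A.nnz}")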

  15. Response of selected binomial coefficients to varying degrees of matrix sparseness and to matrices with known data interrelationships

    Science.gov (United States)

    Archer, A.W.; Maples, C.G.

    1989-01-01

    Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 × 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired. © 1989 International Association for Mathematical Geology.

  16. Gamow-Jordan vectors and non-reducible density operators from higher-order S-matrix poles

    International Nuclear Information System (INIS)

    Bohm, A.; Loewe, M.; Maxson, S.; Patuleanu, P.; Puentmann, C.; Gadella, M.

    1997-01-01

    In analogy to Gamow vectors that are obtained from first-order resonance poles of the S-matrix, one can also define higher-order Gamow vectors which are derived from higher-order poles of the S-matrix. An S-matrix pole of r-th order at z_R = E_R - iΓ/2 leads to r generalized eigenvectors of order k = 0, 1, …, r-1, which are also Jordan vectors of degree (k+1) with generalized eigenvalue (E_R - iΓ/2). The Gamow-Jordan vectors are elements of a generalized complex eigenvector expansion, whose form suggests the definition of a state operator (density matrix) for the microphysical decaying state of this higher-order pole. This microphysical state is a mixture of non-reducible components. In spite of the fact that the k-th order Gamow-Jordan vectors have the polynomial time-dependence which one always associates with higher-order poles, the microphysical state obeys a purely exponential decay law. copyright 1997 American Institute of Physics

  17. Local posterior concentration rate for multilevel sparse sequences

    NARCIS (Netherlands)

    Belitser, E.N.; Nurushev, N.

    2017-01-01

    We consider empirical Bayesian inference in the many normal means model in the situation when the high-dimensional mean vector is multilevel sparse, that is, most of the entries of the parameter vector are some fixed values. For instance, the traditional sparse signal is a particular case (with one

  18. SIAM 1978 fall meeting and symposium on sparse matrix computations. [Knoxville, Tenn. , October 30--November 3

    Energy Technology Data Exchange (ETDEWEB)

    1978-01-01

    The program and abstracts of the SIAM 1978 fall meeting in Knoxville, Tennessee, are given, along with those of the associated symposium on sparse matrix computations. The papers dealt with both pure mathematics and mathematics applied to many different subject areas. (RWR)

  19. Some Algorithms for the Conditional Mean Vector and Covariance Matrix

    Directory of Open Access Journals (Sweden)

    John F. Monahan

    2006-08-01

    Full Text Available We consider here the problem of computing the mean vector and covariance matrix for a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
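
    Neither the sweep operator nor the goal-oriented method of the record is reproduced here; as a minimal numerical companion (synthetic covariance, NumPy only, all names invented for illustration), the sketch below computes the conditional mean and covariance of a partitioned multivariate normal directly from the standard identities, using a linear solve instead of an explicit inverse.

      # Sketch: conditional mean and covariance of a partitioned Gaussian,
      #   mean_{1|2} = mu1 + S12 S22^{-1} (x2 - mu2),  cov_{1|2} = S11 - S12 S22^{-1} S21.
      import numpy as np

      def conditional_normal(mu, Sigma, idx1, idx2, x2):
          """Mean and covariance of x[idx1] given x[idx2] = x2."""
          mu1, mu2 = mu[idx1], mu[idx2]
          S11 = Sigma[np.ix_(idx1, idx1)]
          S12 = Sigma[np.ix_(idx1, idx2)]
          S22 = Sigma[np.ix_(idx2, idx2)]
          cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
          cond_cov = S11 - S12 @ np.linalg.solve(S22, S12.T)
          return cond_mean, cond_cov

      rng = np.random.default_rng(1)
      A = rng.standard_normal((4, 4))
      Sigma = A @ A.T + 4.0 * np.eye(4)        # an arbitrary positive definite covariance
      mu = np.zeros(4)
      m, C = conditional_normal(mu, Sigma, [0, 1], [2, 3], np.array([0.5, -1.0]))
      print(m, np.diag(C))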

  20. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used for the parallelized calculation of matrix-vector multiplication with arbitrary matrix sizes, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of the graph partitioning technique. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and is implemented on the GPU (graphics processing unit).
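
    The hypergraph model itself is not reproduced here. As a much simpler stand-in, the sketch below (NumPy/SciPy, arbitrary sizes) only shows the step any partitioner has to support: splitting a sparse matrix-vector product into independent pieces, here a naive equal-rows block partition evaluated serially, and reassembling the result.

      # Sketch: SpMV assembled from independent row blocks (a naive partition,
      # not a hypergraph partition); each block could be assigned to one device.
      import numpy as np
      import scipy.sparse as sp

      def spmv_by_row_blocks(A_csr, x, n_parts):
          n = A_csr.shape[0]
          bounds = np.linspace(0, n, n_parts + 1, dtype=int)
          pieces = [A_csr[lo:hi, :] @ x for lo, hi in zip(bounds[:-1], bounds[1:])]
          return np.concatenate(pieces)

      A = sp.random(2000, 2000, density=0.005, format="csr", random_state=0)
      x = np.ones(A.shape[1])
      print(np.allclose(spmv_by_row_blocks(A, x, n_parts=4), A @ x))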

  1. Sparse Representation Based SAR Vehicle Recognition along with Aspect Angle

    Directory of Open Access Journals (Sweden)

    Xiangwei Xing

    2014-01-01

    Full Text Available As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into the category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under variations of depression angle and target configuration, as well as incomplete observation.
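
    The MSTAR data, the PCA dictionary and the aspect-angle projection step are not reproduced; the sketch below (synthetic data, a plain ISTA solver for the l1 step, all names and sizes invented for illustration) only shows the sparse-representation-classification core: code the test sample over the stacked training dictionary and assign the class with the smallest class-restricted reconstruction residual.

      # Sketch of sparse representation classification (SRC) on synthetic data.
      # ISTA solves min_a 0.5*||y - D a||^2 + lam*||a||_1 over the stacked dictionary.
      import numpy as np

      def ista(D, y, lam=0.05, n_iter=500):
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              a = a - D.T @ (D @ a - y) / L
              a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
          return a

      rng = np.random.default_rng(0)
      d, per_class, classes = 30, 15, 3
      D = rng.standard_normal((d, per_class * classes))
      D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
      labels = np.repeat(np.arange(classes), per_class)

      y = D[:, labels == 1] @ rng.random(per_class)   # a sample built from class-1 atoms
      y = y / np.linalg.norm(y)
      a = ista(D, y)
      residuals = [np.linalg.norm(y - D[:, labels == c] @ a[labels == c])
                   for c in range(classes)]
      print("predicted class:", int(np.argmin(residuals)))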

  2. Sparse matrix test collections

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  3. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  4. Sparse Nonnegative Matrix Factorization Strategy for Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Hongmei Hu

    2015-12-01

    Full Text Available Current cochlear implant (CI) strategies carry speech information via the waveform envelope in frequency subbands. CIs require efficient speech processing to maximize information transfer to the brain, especially in background noise, where the speech envelope is not robust to noise interference. In such conditions, the envelope, after decomposition into frequency bands, may be enhanced by sparse transformations, such as nonnegative matrix factorization (NMF). Here, a novel CI processing algorithm is described, which works by applying NMF to the envelope matrix (envelopogram) of 22 frequency channels in order to improve performance in noisy environments. It is evaluated for speech in eight-talker babble noise. The critical sparsity constraint parameter was first tuned using objective measures and then evaluated with subjective speech perception experiments for both normal hearing and CI subjects. Results from vocoder simulations with 10 normal hearing subjects showed that the algorithm significantly enhances speech intelligibility with the selected sparsity constraints. Results from eight CI subjects showed no significant overall improvement compared with the standard advanced combination encoder algorithm, but a trend toward improvement of word identification of about 10 percentage points at +15 dB signal-to-noise ratio (SNR) was observed in the eight CI subjects. Additionally, a considerable reduction of the spread of speech perception performance, from 40%-93% for the advanced combination encoder to 80%-100% for the suggested NMF coding strategy, was observed.
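
    The CI processing chain and the exact sparsity constraint of the record are not reproduced; the sketch below only illustrates one common way to make an NMF sparse, an l1 penalty on the activation matrix handled by multiplicative updates, applied to a random nonnegative matrix standing in for the 22-channel envelopogram.

      # Sketch: NMF with an l1 sparsity penalty on the activations H,
      # via multiplicative updates.  Data here is random and nonnegative.
      import numpy as np

      def sparse_nmf(V, k, lam=0.1, n_iter=200, eps=1e-9, seed=0):
          rng = np.random.default_rng(seed)
          n, m = V.shape
          W = rng.random((n, k))
          H = rng.random((k, m))
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # l1 term enters the denominator
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      V = np.abs(np.random.default_rng(1).standard_normal((22, 400)))  # e.g. 22 channels x frames
      W, H = sparse_nmf(V, k=8, lam=0.2)
      print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
      print("fraction of near-zero activations:", np.mean(H < 1e-3))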

  5. Linear Matrix Inequalities for Analysis and Control of Linear Vector Second-Order Systems

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2015-01-01

    SUMMARY Many dynamical systems are modeled as vector second-order differential equations. This paper presents analysis and synthesis conditions in terms of LMI with explicit dependence on the coefficient matrices of vector second-order systems. These conditions benefit from the separation between the Lyapunov matrix and the system matrices by introducing matrix multipliers, which potentially reduce conservativeness in hard control problems. Multipliers facilitate the usage of parameter-dependent Lyapunov functions as certificates of stability of uncertain and time-varying vector second-order systems. The conditions introduced in this work have the potential to increase the practice of analyzing and controlling systems directly in vector second-order form. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Design and experimental verification for optical module of optical vector-matrix multiplier.

    Science.gov (United States)

    Zhu, Weiwei; Zhang, Lei; Lu, Yangyang; Zhou, Ping; Yang, Lin

    2013-06-20

    Optical computing is a new method to implement signal processing functions. The multiplication between a vector and a matrix is an important arithmetic algorithm in the signal processing domain. The optical vector-matrix multiplier (OVMM) is an optoelectronic system to carry out this operation, which consists of an electronic module and an optical module. In this paper, we propose an optical module for OVMM. To eliminate the cross talk and make full use of the optical elements, an elaborately designed structure that involves spherical lenses and cylindrical lenses is utilized in this optical system. The optical design software package ZEMAX is used to optimize the parameters and simulate the whole system. Finally, experimental data is obtained through experiments to evaluate the overall performance of the system. The results of both simulation and experiment indicate that the system constructed can implement the multiplication between a matrix with dimensions of 16 by 16 and a vector with a dimension of 16 successfully.

  7. Vector Fields on Product Manifolds

    OpenAIRE

    Kurz, Stefan

    2011-01-01

    This short report establishes some basic properties of smooth vector fields on product manifolds. The main results are: (i) On a product manifold there always exists a direct sum decomposition into horizontal and vertical vector fields. (ii) Horizontal and vertical vector fields are naturally isomorphic to smooth families of vector fields defined on the factors. Vector fields are regarded as derivations of the algebra of smooth functions.

  8. Matrix product representation of the stationary state of the open zero range process

    Science.gov (United States)

    Bertin, Eric; Vanicat, Matthieu

    2018-06-01

    Many one-dimensional lattice particle models with open boundaries, like the paradigmatic asymmetric simple exclusion process (ASEP), have their stationary states represented in the form of a matrix product, with matrices that do not explicitly depend on the lattice site. In contrast, the stationary state of the open 1D zero-range process (ZRP) takes an inhomogeneous factorized form, with site-dependent probability weights. We show that in spite of the absence of correlations, the stationary state of the open ZRP can also be represented in a matrix product form, where the matrices are site-independent, non-commuting and determined from algebraic relations resulting from the master equation. We recover the known distribution of the open ZRP in two different ways: first, using an explicit representation of the matrices and boundary vectors; second, from the sole knowledge of the algebraic relations satisfied by these matrices and vectors. Finally, an interpretation of the relation between the matrix product form and the inhomogeneous factorized form is proposed within the framework of hidden Markov chains.

  9. Optimisation du produit matrice-vecteur creux sur architecture GPU pour un simulateur de réservoir [Optimization of the sparse matrix-vector product on GPU architectures for a reservoir simulator]

    OpenAIRE

    Rossignon , Corentin

    2013-01-01

    National audience; For Total, simulating reservoirs is an important step in the process of optimizing production. Nowadays, these simulations run entirely on CPUs. We have therefore attempted to accelerate the sparse matrix-vector product operators of the simulation by using GPUs. Common GPU libraries for sparse linear algebra use generic formats for sparse matrix storage that are more or less performant on GPU but that do not allow one to fully exploit the specific structure of the matr...

  10. Sparse modeling of EELS and EDX spectral imaging data by nonnegative matrix factorization

    Energy Technology Data Exchange (ETDEWEB)

    Shiga, Motoki, E-mail: shiga_m@gifu-u.ac.jp [Department of Electrical, Electronic and Computer Engineering, Gifu University, 1-1, Yanagido, Gifu 501-1193 (Japan); Tatsumi, Kazuyoshi; Muto, Shunsuke [Advanced Measurement Technology Center, Institute of Materials and Systems for Sustainability, Nagoya University, Chikusa-ku, Nagoya 464-8603 (Japan); Tsuda, Koji [Graduate School of Frontier Sciences, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa 277-8561 (Japan); Center for Materials Research by Information Integration, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba 305-0047 (Japan); Biotechnology Research Institute for Drug Discovery, National Institute of Advanced Industrial Science and Technology, 2-4-7 Aomi Koto-ku, Tokyo 135-0064 (Japan); Yamamoto, Yuta [High-Voltage Electron Microscope Laboratory, Institute of Materials and Systems for Sustainability, Nagoya University, Chikusa-ku, Nagoya 464-8603 (Japan); Mori, Toshiyuki [Environment and Energy Materials Division, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044 (Japan); Tanji, Takayoshi [Division of Materials Research, Institute of Materials and Systems for Sustainability, Nagoya University, Chikusa-ku, Nagoya 464-8603 (Japan)

    2016-11-15

    Advances in scanning transmission electron microscopy (STEM) techniques have enabled us to automatically obtain electron energy-loss (EELS)/energy-dispersive X-ray (EDX) spectral datasets from a specified region of interest (ROI) at an arbitrary step width, called spectral imaging (SI). Instead of manually identifying the potential constituent chemical components from the ROI and determining the chemical state of each spectral component from the SI data stored in a huge three-dimensional matrix, it is more effective and efficient to use a statistical approach for the automatic resolution and extraction of the underlying chemical components. Among many different statistical approaches, we adopt a non-negative matrix factorization (NMF) technique, mainly because of the natural assumption of non-negative values in the spectra and cardinalities of chemical components, which are always positive in actual data. This paper proposes a new NMF model with two penalty terms: (i) an automatic relevance determination (ARD) prior, which optimizes the number of components, and (ii) a soft orthogonal constraint, which clearly resolves each spectrum component. For the factorization, we further propose a fast optimization algorithm based on hierarchical alternating least-squares. Numerical experiments using both phantom and real STEM-EDX/EELS SI datasets demonstrate that the ARD prior successfully identifies the correct number of physically meaningful components. The soft orthogonal constraint is also shown to be effective, particularly for STEM-EELS SI data, where neither the spatial nor spectral entries in the matrices are sparse. - Highlights: • Automatic resolution of chemical components from spectral imaging is considered. • We propose a new non-negative matrix factorization with two new penalties. • The first penalty is sparseness to choose the number of components from data. • Experimental results with real data demonstrate effectiveness of our method.

  11. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  12. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well-known linear and mixture discriminant analyses and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  13. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  14. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Vol. 55, No. 1 (2016), pp. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  15. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  16. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  17. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicating the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
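
    The record describes hardware vector instructions; the snippet below is only a software analogy (NumPy broadcasting standing in for the splat, an explicit accumulator array for the multiply-add), showing how the full matrix product is accumulated from such partial products.

      # Software analogy of the load / load-and-splat / multiply-add pattern:
      # each partial product is (column of A) * (broadcast element of B),
      # accumulated into the result, one output column at a time.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((4, 6))
      B = rng.standard_normal((6, 3))
      C = np.zeros((4, 3))

      for j in range(B.shape[1]):
          acc = np.zeros(A.shape[0])                   # accumulator "vector register"
          for k in range(A.shape[1]):
              a_vec = A[:, k]                          # vector load of the first operand
              b_splat = np.full(A.shape[0], B[k, j])   # load-and-splat of one element
              acc += a_vec * b_splat                   # multiply-add: one partial product
          C[:, j] = acc

      print(np.allclose(C, A @ B))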

  18. Sparse PCA with Oracle Property.

    Science.gov (United States)

    Gu, Quanquan; Wang, Zhaoran; Liu, Han

    In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a [Formula: see text] statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.

  19. A matrix formulation of Frobenius power series solutions using products of 4×4 matrices

    Directory of Open Access Journals (Sweden)

    Jeremy Mandelkern

    2015-08-01

    Full Text Available In Coddington and Levinson [7, p. 119, Thm. 4.1] and Balser [4, p. 18-19, Thm. 5], matrix formulations of Frobenius theory, near a regular singular point, are given using 2×2 matrix recurrence relations yielding fundamental matrices consisting of two linearly independent solutions together with their quasi-derivatives. In this article we apply a reformulation of these matrix methods to the Bessel equation of nonintegral order. The reformulated approach of this article differs from [7] and [4] by its implementation of a new "vectorization" procedure that yields recurrence relations of an altogether different form: namely, it replaces the implicit 2×2 matrix recurrence relations of both [7] and [4] by explicit 4×4 matrix recurrence relations that are implemented by means only of 4×4 matrix products. This new idea of using a vectorization procedure may further enable the development of symbolic manipulator programs for matrix forms of the Frobenius theory.

  20. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    Science.gov (United States)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
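
    As a minimal companion to the record (not Kaporin's construction), the sketch below runs a preconditioned conjugate gradient in which the preconditioner is applied as a product of two sparse triangular factors, M^{-1} = G^T G. A diagonal G with G^T G = diag(A)^{-1} stands in as the crudest member of that family; the paper would use an optimized triangular sparsity pattern and, optionally, a Chebyshev polynomial on top. Test matrix and sizes are illustrative.

      # Sketch: PCG with a factorized approximate-inverse preconditioner M^{-1} = G^T G.
      import numpy as np
      import scipy.sparse as sp

      def pcg(A, b, apply_Minv, tol=1e-8, maxit=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          z = apply_Minv(r)
          p = z.copy()
          rz = r @ z
          for it in range(maxit):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x = x + alpha * p
              r = r - alpha * Ap
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  return x, it + 1
              z = apply_Minv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, maxit

      n = 200
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian, SPD
      b = np.ones(n)
      G = sp.diags(1.0 / np.sqrt(A.diagonal()))        # diagonal factor: G^T G = diag(A)^{-1}
      x, iters = pcg(A, b, lambda r: G.T @ (G @ r))    # preconditioning = two sparse products
      print(iters, np.linalg.norm(A @ x - b))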

  1. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise linear...training examples, where x(i) ∈ R^N is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results...normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction

  2. Production of lentiviral vectors

    Directory of Open Access Journals (Sweden)

    Otto-Wilhelm Merten

    2016-01-01

    Full Text Available Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. More recent developments, however, tend toward the use of hollow-fiber reactors, suspension culture processes, and the implementation of stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performances. Furthermore, developments in the domain of stable cell lines and their path toward use as production vehicles of clinical material will be presented.

  3. Mixed Analog/Digital Matrix-Vector Multiplier for Neural Network Synapses

    DEFF Research Database (Denmark)

    Lehmann, Torsten; Bruun, Erik; Dietrich, Casper

    1996-01-01

    In this work we present a hardware efficient matrix-vector multiplier architecture for artificial neural networks with digitally stored synapse strengths. We present a novel technique for manipulating bipolar inputs based on an analog two's complement method and an accurate current rectifier

  4. ATLAS measurements of vector boson production

    CERN Document Server

    Levchenko, M; The ATLAS collaboration

    2014-01-01

    ATLAS measurements of vector boson production with associated jets. Production of light and heavy-flavour jets in association with a W or a Z boson in proton-proton collisions is an important process for studying QCD in multi-scale environments and the proton parton content. The cross sections, differential in several kinematic variables, have been measured with the ATLAS detector in 7 TeV proton-proton collisions and compared to high-order QCD calculations and Monte Carlo simulations. The results demonstrate the need for the inclusion of high-multiplicity matrix elements in the calculations of high jet multiplicities. The ratio of (Z+jets)/(W+jets) provides a precise test of QCD due to the large cancellations of theoretical and experimental uncertainties. The measurement of the W+c production cross section has a unique sensitivity to the strange-quark density, which is poorly known at low x. W or Z boson production in association with b-quark jets, on the other hand, probes the b-quark density in the proton and the b-qu...

  5. Comparison between sparsely distributed memory and Hopfield-type neural network models

    Science.gov (United States)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to cover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.

  6. Hierarchical probing for estimating the trace of the matrix inverse on toroidal lattices

    Energy Technology Data Exchange (ETDEWEB)

    Stathopoulos, Andreas [College of William and Mary, Williamsburg, VA; Laeuchli, Jesse [College of William and Mary, Williamsburg, VA; Orginos, Kostas [College of William and Mary, Williamsburg, VA; Jefferson Lab

    2013-10-01

    The standard approach for computing the trace of the inverse of a very large, sparse matrix $A$ is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, the elements of $A^{-1}$ display certain decay properties away from the nonzero structure of $A$, but random vectors cannot exploit this induced structure of $A^{-1}$. Probing is a technique that, given a sparsity pattern of $A$, discovers elements of $A$ through matrix-vector multiplications with specially designed vectors. In the case of $A^{-1}$, the pattern is obtained by distance-$k$ coloring of the graph of $A$. For sufficiently large $k$, the method produces accurate trace estimates, but the cost of producing the colorings becomes prohibitive. More importantly, it is difficult to search for an optimal $k$ value, since none of the work for prior choices of $k$ can be reused.
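
    The coloring and probing machinery is not shown; the sketch below only reproduces the baseline the record starts from, the Monte Carlo (Hutchinson) estimator of tr(A^{-1}) with random ±1 vectors, where each sample costs one linear solve (done densely here for brevity; in practice it would be an iterative solve with the sparse A).

      # Sketch: Hutchinson Monte Carlo estimate of tr(A^{-1}) with Rademacher vectors.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 300
      B = rng.standard_normal((n, n))
      A = B @ B.T + n * np.eye(n)               # a small SPD test matrix

      exact = np.trace(np.linalg.inv(A))

      n_samples = 200
      est = 0.0
      for _ in range(n_samples):
          z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
          est += z @ np.linalg.solve(A, z)      # z^T A^{-1} z: one solve per sample
      est /= n_samples

      print(f"exact {exact:.4f}  estimate {est:.4f}")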

  7. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  8. General analysis of the intermediate vector boson production in hadron collisions

    International Nuclear Information System (INIS)

    Rekalo, M.P.

    1981-01-01

    Polarization states of intermediate vector W(Z) bosons produced in inclusive hadron-hadron collisions, a_1 + a_2 → W(Z) + X, where a_1, a_2 are unpolarized hadrons and X is the undetected particle system, are studied in general. Since spatial parity is not conserved in these processes, polarized W(Z) boson production must be described by 9 real structure functions, 4 of which characterize the effects of invariance violation under spatial reflections. Three (P-odd) structure functions should exactly equal zero for pole W(Z) production mechanisms in quantum chromodynamics (up to gluon corrections). The relation of the W(Z) matrix elements to the structure functions in the different coordinate systems used when discussing the angular distributions of W(Z) boson decay products is established. The description of the W(Z) polarization properties is analyzed in terms of the spin 4-vector and the quadrupole polarization tensor. [ru]

  9. Improvement of the Convergence of the Invariant Imbedding T-Matrix Method

    Science.gov (United States)

    Zhai, S.; Panetta, R. L.; Yang, P.

    2017-12-01

    The invariant imbedding T-matrix method (IITM) is based on an electromagnetic volume integral equation to compute the T-matrix of an arbitrary scattering particle. A free-space Green's function is chosen as the integral kernel and thus each source point is placed in an imaginary vacuum spherical shell extending from the center to that source point. The final T-matrix (of the largest circumscribing sphere) is obtained through an iterative relation that, layer by layer, computes the T-matrix from the particle center to the outermost shell. On each spherical shell surface, an integration of the product of the refractive index 𝜀(𝜃, 𝜑) and vector spherical harmonics must be performed, resulting in the so-called U-matrix, which directly leads to the T-matrix on the spherical surface. Our observations indicate that the matrix size and sparseness are determined by the particular refractive index function 𝜀(𝜃, 𝜑). If 𝜀(𝜃, 𝜑) is an analytic function on the surface, then the matrix elements resulting from the integration decay rapidly, leading to sparse matrix; if 𝜀(𝜃, 𝜑) is not (for example, contains jump discontinuities), then the matrix elements decay slowly, leading to a large dense matrix. The intersection between an irregular scatterer and each spherical shell can leave jump discontinuities in 𝜀(𝜃, 𝜑) distributed over the shell surface. The aforementioned feature is analogous to the Gibbs phenomenon appearing in the orthogonal expansion of non-smooth functions with Hermitian eigenfunctions (complex exponential, Legendre, Bessel,...) where poor convergence speed is a direct consequence of the slow decay rate of the expansion coefficients. Various methods have been developed to deal with this slow convergence in the presence of discontinuities. Among the different approaches the most practical one may be a spectral filter: a filter is applied on the

  10. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.

  11. Video deraining and desnowing using temporal correlation and low-rank matrix completion.

    Science.gov (United States)

    Kim, Jin-Hwan; Sim, Jae-Young; Kim, Chang-Su

    2015-09-01

    A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.

  12. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    Science.gov (United States)

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  13. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    Science.gov (United States)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
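
    As an illustration of the digital partitioning idea in ordinary software (integer data, NumPy, no optical hardware; the sizes and base are arbitrary), the sketch below splits the input vector into base-b digits, forms each low-precision partial product separately, and recombines them digitally. Exact recovery relies on A(sum_j b^j d_j) = sum_j b^j (A d_j).

      # Sketch: "digital partitioning" of a matrix-vector product.
      import numpy as np

      def digits(x_int, base, n_digits):
          """Base-`base` digits of a nonnegative integer vector, least significant first."""
          d = []
          for _ in range(n_digits):
              d.append(x_int % base)
              x_int = x_int // base
          return d

      rng = np.random.default_rng(0)
      A = rng.integers(0, 16, size=(5, 8))
      x = rng.integers(0, 256, size=8)

      base, n_digits = 4, 4                    # four base-4 digits cover values < 256
      partials = [A @ d for d in digits(x, base, n_digits)]    # low-precision passes
      y = sum(base**j * p for j, p in enumerate(partials))     # digital recombination

      print(np.array_equal(y, A @ x))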

  14. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and cyclic S-matrix based on the sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, which implements the Hadamard coding. In contrast with Hadamard transform spectrometry, based on the shift invariability, this spectrometer may have the advantage of a high efficiency. Simulations and experiments show that the nonlinear solution with a sparse reconstruction has a better signal-to-noise ratio than the linear solution and the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.

  15. Rational calculation accuracy in acousto-optical matrix-vector processor

    Science.gov (United States)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.

  16. A direct derivation of the exact Fisher information matrix of Gaussian vector state space models

    NARCIS (Netherlands)

    Klein, A.A.B.; Neudecker, H.

    2000-01-01

    This paper deals with a direct derivation of Fisher's information matrix of vector state space models for the general case, by which is meant the establishment of the matrix as a whole and not element by element. The method to be used is matrix differentiation, see [4]. We assume the model to be

  17. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in other basis space, we develop a general sparse regularization method based on minimizing l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, other three sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  18. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that arise from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  19. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of

  20. On particle creation by black holes. [Quantum mechanical state vector, gravitational collapse, Hermition scalar field, density matrix

    Energy Technology Data Exchange (ETDEWEB)

    Wald, R M [Chicago Univ., Ill. (USA). Lab. for Astrophysics and Space Research

    1975-11-01

    Hawking's analysis of particle creation by black holes is extended by explicitly obtaining the expression for the quantum mechanical state vector PSI which results from particle creation starting from the vacuum during gravitational collapse. We first discuss the quantum field theory of a Hermitian scalar field in an external potential or in a curved but asymptotically flat spacetime with no horizon present. Making the necessary modification for the case when a horizon is present, we apply this theory for a massless Hermitian scalar field to get the state vector describing the steady state emission at late times for particle creation during gravitational collapse to a Schwarzschild black hole. We find that the state vector describing particle creation from the vacuum decomposes into a simple product of state vectors for each individual mode. The density matrix describing emission of particles to infinity by this particle creation process is found to be identical to that of black body emission. Thus, black hole emission agrees in complete detail with black body emission (orig./BJ).

  1. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  2. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decompositions. The other is that they must repeat the learning whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.

  3. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    Science.gov (United States)

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
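
    CheSS itself is not reproduced here; the sketch below only illustrates the underlying idea of a Chebyshev expansion of a matrix function, evaluating f(A)v through the three-term recurrence so that only matrix-vector products with A are needed. The function (exp), spectral bounds, degree and sizes are illustrative assumptions, not values from the paper.

      # Sketch: f(A) @ v from a Chebyshev expansion, using only matrix-vector products.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      def cheb_matfunc_apply(A, v, f, lam_min, lam_max, deg=30):
          a, b = lam_min, lam_max
          # Chebyshev coefficients of f mapped from [a, b] onto [-1, 1].
          t = np.linspace(-1.0, 1.0, 2000)
          coeffs = C.chebfit(t, f(0.5 * (b - a) * t + 0.5 * (b + a)), deg)
          # Affinely scaled operator whose spectrum lies in [-1, 1].
          scale = lambda w: (2.0 * (A @ w) - (b + a) * w) / (b - a)
          # Three-term recurrence: T_{k+1} v = 2 As T_k v - T_{k-1} v.
          y0, y1 = v, scale(v)
          out = coeffs[0] * y0 + coeffs[1] * y1
          for k in range(2, deg + 1):
              y2 = 2.0 * scale(y1) - y0
              out += coeffs[k] * y2
              y0, y1 = y1, y2
          return out

      rng = np.random.default_rng(0)
      Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
      eigs = np.linspace(0.5, 3.0, 50)
      A = Q @ np.diag(eigs) @ Q.T                    # SPD matrix with spectrum in [0.5, 3]
      v = rng.standard_normal(50)

      approx = cheb_matfunc_apply(A, v, np.exp, 0.5, 3.0, deg=25)
      exact = Q @ np.diag(np.exp(eigs)) @ Q.T @ v
      print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))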

  4. Matrix-Vector Based Fast Fourier Transformations on SDR Architectures

    Directory of Open Access Journals (Sweden)

    Y. He

    2008-05-01

    Full Text Available Today Discrete Fourier Transforms (DFTs) are applied in various radio standards based on OFDM (Orthogonal Frequency Division Multiplex). It is important to gain a fast computational speed for the DFT, which is usually achieved by using specialized Fast Fourier Transform (FFT) engines. However, in the face of Software Defined Radio (SDR) development, more general (parallel) processor architectures are often desirable, which are not tailored to FFT computations. Therefore, alternative approaches are required to reduce the complexity of the DFT. Starting from a matrix-vector based description of the FFT idea, we will present different factorizations of the DFT matrix, which allow a reduction of the complexity that lies between the original DFT and the minimum FFT complexity. The computational complexities of these factorizations and their suitability for implementation on different processor architectures are investigated.
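
    As a reference point for the factorizations discussed above, the snippet below contrasts the plain O(N^2) matrix-vector formulation of the DFT with a fully factorized FFT from NumPy; the transform length is an arbitrary choice.

```python
import numpy as np

def dft_matrix(n):
    """Full n x n DFT matrix F with F[j, k] = exp(-2*pi*i*j*k/n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

n = 64
x = np.random.default_rng(0).standard_normal(n)

X_matvec = dft_matrix(n) @ x   # O(n^2) matrix-vector product
X_fft = np.fft.fft(x)          # O(n log n) factorized computation

print(np.allclose(X_matvec, X_fft))  # same result, very different cost
```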

  5. Stokes-vector and Mueller-matrix polarimetry [Invited].

    Science.gov (United States)

    Azzam, R M A

    2016-07-01

    This paper reviews the current status of instruments for measuring the full 4×1 Stokes vector S, which describes the state of polarization (SOP) of totally or partially polarized light, and the 4×4 Mueller matrix M, which determines how the SOP is transformed as light interacts with a material sample or an optical element or system. The principle of operation of each instrument is briefly explained by using the Stokes-Mueller calculus. The development of fast, automated, imaging, and spectroscopic instruments over the last 50 years has greatly expanded the range of applications of optical polarimetry and ellipsometry in almost every branch of science and technology. Current challenges and future directions of this important branch of optics are also discussed.
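
    Numerically, the Stokes-Mueller calculus mentioned above reduces to 4x4 matrix times 4x1 vector products. A minimal illustration using the textbook Mueller matrix of an ideal linear polarizer follows; it is a generic example, not taken from the instruments reviewed in the paper.

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with transmission axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,     c,     s,  0],
        [c, c * c, c * s,  0],
        [s, c * s, s * s,  0],
        [0,     0,     0,  0],
    ])

# Unpolarized light of unit intensity: S = [I, Q, U, V].
S_in = np.array([1.0, 0.0, 0.0, 0.0])

# Pass through a horizontal polarizer, then one at 45 degrees.
S_out = linear_polarizer(np.pi / 4) @ (linear_polarizer(0.0) @ S_in)
print(S_out)  # intensity drops to 0.25 and the output is polarized at 45 degrees
```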

  6. Rotations with Rodrigues' vector

    International Nuclear Information System (INIS)

    Pina, E

    2011-01-01

    The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears to be a fundamental matrix that is used to express the components of the angular velocity, the rotation matrix and the angular momentum vector. The Hamiltonian formalism of rotational dynamics in terms of this vector uses the same matrix. The quantization of the rotational dynamics is performed with simple rules if one uses Rodrigues' vector and similar formal expressions for the quantum operators that mimic the Hamiltonian classical dynamics.
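
    A small numerical illustration of the parametrization described above: building the rotation matrix directly from the Rodrigues (Gibbs) vector b = tan(theta/2) n. The formula used is the standard one for this parametrization; the Hamiltonian and quantization machinery of the paper is not reproduced.

```python
import numpy as np

def skew(b):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -b[2], b[1]],
                     [b[2], 0, -b[0]],
                     [-b[1], b[0], 0]])

def rotation_from_rodrigues(b):
    """Rotation matrix from the Rodrigues vector b = tan(theta/2) * n."""
    B = skew(b)
    return np.eye(3) + 2.0 / (1.0 + b @ b) * (B + B @ B)

# 90-degree rotation about the z axis: b = tan(45 deg) * e_z = e_z.
R = rotation_from_rodrigues(np.array([0.0, 0.0, 1.0]))
print(np.round(R, 6))                    # maps x -> y and y -> -x
print(np.allclose(R @ R.T, np.eye(3)))   # orthogonality check
```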

  7. Sparse Frequency Waveform Design for Radar-Embedded Communication

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2016-01-01

    Full Text Available For Tag applications with a covert communication function, a method for sparse frequency waveform design based on radar-embedded communication is proposed. Firstly, sparse frequency waveforms are designed based on power spectral density fitting and a quasi-Newton method. Secondly, the eigenvalue decomposition of the sparse frequency waveform sequence is used to obtain the dominant space. Finally, the communication waveforms are designed through the projection of orthogonal pseudorandom vectors onto the orthogonal subspace. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of the communication signals, thus achieving a higher communication rate. A certain correlation exists between the mutually orthogonal communication signal samples and the sparse frequency waveform, which guarantees a low SER (signal error rate) and LPI (low probability of intercept). The simulation results verify the effectiveness of this method.

  8. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    Science.gov (United States)

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations are manually checked by curators, and the others are electronically inferred. Although quality control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, however, how to identify noisy annotations is an important but seldom studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA applies sparse representation on the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of sparse representation coefficients to measure the semantic similarity between genes. Secondly, it preliminarily predicts noisy annotations of a gene based on aggregated votes from semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived in different periods, and then weights entries of the association matrix via estimated ratios and propagates weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Codes and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA.

  9. Simulating quantum systems on classical computers with matrix product states

    International Nuclear Information System (INIS)

    Kleine, Adrian

    2010-01-01

    In this thesis, the numerical simulation of strongly-interacting many-body quantum-mechanical systems using matrix product states (MPS) is considered. Matrix-Product-States are a novel representation of arbitrary quantum many-body states. Using quantum information theory, it is possible to show that Matrix-Product-States provide a polynomial-sized representation of one-dimensional quantum systems, thus allowing an efficient simulation of one-dimensional quantum system on classical computers. Matrix-Product-States form the conceptual framework of the density-matrix renormalization group (DMRG). After a general introduction in the first chapter of this thesis, the second chapter deals with Matrix-Product-States, focusing on the development of fast and stable algorithms. To obtain algorithms to efficiently calculate ground states, the density-matrix renormalization group is reformulated using the Matrix-Product-States framework. Further, time-dependent problems are considered. Two different algorithms are presented, one based on a Trotter decomposition of the time-evolution operator, the other one on Krylov subspaces. Finally, the evaluation of dynamical spectral functions is discussed, and a correction vector-based method is presented. In the following chapters, the methods presented in the second chapter, are applied to a number of different physical problems. The third chapter deals with the existence of chiral phases in isotropic one-dimensional quantum spin systems. A preceding analytical study based on a mean-field approach indicated the possible existence of those phases in an isotropic Heisenberg model with a frustrating zig-zag interaction and a magnetic field. In this thesis, the existence of the chiral phases is shown numerically by using Matrix-Product-States-based algorithms. In the fourth chapter, we propose an experiment using ultracold atomic gases in optical lattices, which allows a well controlled observation of the spin-charge separation (of

  10. Simulating quantum systems on classical computers with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Kleine, Adrian

    2010-11-08

    In this thesis, the numerical simulation of strongly-interacting many-body quantum-mechanical systems using matrix product states (MPS) is considered. Matrix-Product-States are a novel representation of arbitrary quantum many-body states. Using quantum information theory, it is possible to show that Matrix-Product-States provide a polynomial-sized representation of one-dimensional quantum systems, thus allowing an efficient simulation of one-dimensional quantum system on classical computers. Matrix-Product-States form the conceptual framework of the density-matrix renormalization group (DMRG). After a general introduction in the first chapter of this thesis, the second chapter deals with Matrix-Product-States, focusing on the development of fast and stable algorithms. To obtain algorithms to efficiently calculate ground states, the density-matrix renormalization group is reformulated using the Matrix-Product-States framework. Further, time-dependent problems are considered. Two different algorithms are presented, one based on a Trotter decomposition of the time-evolution operator, the other one on Krylov subspaces. Finally, the evaluation of dynamical spectral functions is discussed, and a correction vector-based method is presented. In the following chapters, the methods presented in the second chapter, are applied to a number of different physical problems. The third chapter deals with the existence of chiral phases in isotropic one-dimensional quantum spin systems. A preceding analytical study based on a mean-field approach indicated the possible existence of those phases in an isotropic Heisenberg model with a frustrating zig-zag interaction and a magnetic field. In this thesis, the existence of the chiral phases is shown numerically by using Matrix-Product-States-based algorithms. In the fourth chapter, we propose an experiment using ultracold atomic gases in optical lattices, which allows a well controlled observation of the spin-charge separation (of

  11. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition

    DEFF Research Database (Denmark)

    Yang, Yukay

    I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...... to consider multivariate volatility modelling....

  12. Space Vector Modulation for an Indirect Matrix Converter with Improved Input Power Factor

    Directory of Open Access Journals (Sweden)

    Nguyen Dinh Tuyen

    2017-04-01

    Full Text Available Pulse width modulation strategies have been developed for indirect matrix converters (IMCs) in order to improve their performance. In indirect matrix converters, the LC input filter is used to remove input current harmonics and electromagnetic interference problems. Unfortunately, due to the existence of the input filter, the input power factor is diminished, especially during operation at low voltage outputs. In this paper, a new space vector modulation (SVM) is proposed to compensate for the input power factor of the indirect matrix converter. Both computer simulation and experimental studies through hardware implementation were performed to verify the effectiveness of the proposed modulation strategy.

  13. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results. However, face feature extraction based on sparse representation alone is too simple, and the resulting coefficients are not sufficiently sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved Gabor feature overcomes the problem of high vector dimensionality, reduces the computation and storage cost, and enhances the robustness of the algorithm to changes in the environment. Since the classification efficiency of sparse representation is determined by the collaborative representation, we simplify the sparse constraint based on the L1 norm to a least-squares constraint, which makes the sparse coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and the recognition rate of the algorithm is improved.
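
    The least-squares relaxation mentioned above corresponds to the generic collaborative-representation classifier, sketched below in plain NumPy; the dictionary of training columns, the regularization weight and the toy data are assumptions, and the Gabor feature extraction of the paper is omitted.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative-representation classification: code y over the whole
    dictionary D (columns = training samples) with an l2 penalty, then assign
    the class whose columns best reconstruct y."""
    n_atoms = D.shape[1]
    # Ridge-regularized least squares: x = (D^T D + lam I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get), residuals

# Toy dictionary: 2 classes, 3 training vectors each (columns), 10-dim features.
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 6))
labels = np.array([0, 0, 0, 1, 1, 1])
y = D[:, 4] + 0.05 * rng.standard_normal(10)   # noisy copy of a class-1 sample
print(crc_classify(D, labels, y)[0])           # expected: 1
```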

  14. vSmartMOM: A vector matrix operator method-based radiative transfer model linearized with respect to aerosol properties

    International Nuclear Information System (INIS)

    Sanghavi, Suniti; Davis, Anthony B.; Eldering, Annmarie

    2014-01-01

    In this paper, we build up on the scalar model smartMOM to arrive at a formalism for linearized vector radiative transfer based on the matrix operator method (vSmartMOM). Improvements have been made with respect to smartMOM in that a novel method of computing intensities for the exact viewing geometry (direct raytracing) without interpolation between quadrature points has been implemented. Also, the truncation method employed for dealing with highly peaked phase functions has been changed to a vector adaptation of Wiscombe's delta-m method. These changes enable speedier and more accurate radiative transfer computations by eliminating the need for a large number of quadrature points and coefficients for generalized spherical functions. We verify our forward model against the benchmarking results of Kokhanovsky et al. (2010) [22]. All non-zero Stokes vector elements are found to show agreement up to mostly the seventh significant digit for the Rayleigh atmosphere. Intensity computations for aerosol and cloud show an agreement of well below 0.03% and 0.05% at all viewing angles except around the solar zenith angle (60°), where most radiative models demonstrate larger variances due to the strongly forward-peaked phase function. We have for the first time linearized vector radiative transfer based on the matrix operator method with respect to aerosol optical and microphysical parameters. We demonstrate this linearization by computing Jacobian matrices for all Stokes vector elements for a multi-angular and multispectral measurement setup. We use these Jacobians to compare the aerosol information content of measurements using only the total intensity component against those using the idealized measurements of full Stokes vector [I,Q,U,V] as well as the more practical use of only [I,Q,U]. As expected, we find for the considered example that the accuracy of the retrieved parameters improves when the full Stokes vector is used. The information content for the full Stokes

  15. Bioreactor production of recombinant herpes simplex virus vectors.

    Science.gov (United States)

    Knop, David R; Harrell, Heather

    2007-01-01

    Serotypical application of herpes simplex virus (HSV) vectors to gene therapy (type 1) and prophylactic vaccines (types 1 and 2) has garnered substantial clinical interest recently. HSV vectors and amplicons have also been employed as helper virus constructs for manufacture of the dependovirus adeno-associated virus (AAV). Large quantities of infectious HSV stocks are requisite for these therapeutic applications, requiring a scalable vector manufacturing and processing platform comprised of unit operations which accommodate the fragility of HSV. In this study, production of a replication deficient rHSV-1 vector bearing the rep and cap genes of AAV-2 (denoted rHSV-rep2/cap2) was investigated. Adaptation of rHSV production from T225 flasks to a packed bed, fed-batch bioreactor permitted an 1100-fold increment in total vector production without a decrease in specific vector yield (pfu/cell). The fed-batch bioreactor system afforded a rHSV-rep2/cap2 vector recovery of 2.8 × 10^12 pfu. The recovered vector was concentrated by tangential flow filtration (TFF), permitting vector stocks to be formulated at greater than 1.5 × 10^9 pfu/mL.

  16. Some remarks on a generalized vector product

    OpenAIRE

    ACOSTA-HUMÁNEZ, PRIMITIVO; ARANDA, MOISÉS; NÚÑEZ, REINALDO

    2011-01-01

    Abstract. In this paper we use a generalized vector product to construct an exterior form ⊥. Finally, for n = k - 1 we introduce the reversing operation to study this generalized vector product over palindromic and antipalindromic vectors.

  17. Codesign of Beam Pattern and Sparse Frequency Waveforms for MIMO Radar

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2015-01-01

    Full Text Available Multiple-input multiple-output (MIMO) radar takes advantage of the high degrees of freedom available for beam pattern design and waveform optimization, because each antenna in a centralized MIMO radar system can transmit a different signal waveform. When the continuous band is divided into several pieces, sparse frequency radar waveforms play an important role due to the special pattern of the sparse spectrum. In this paper, we start from the covariance matrix of the transmitted waveform and extend the concept of sparse frequency design to the study of the MIMO radar beam pattern. With this idea in mind, we first solve the problem of the semidefinite constraint by optimization tools and get the desired covariance matrix of the ideal beam pattern. Then, we use the acquired covariance matrix and generalize the objective function by adding the constraint of both constant modulus of the signals and the corresponding spectrum. Finally, we solve the objective function by the cyclic algorithm and obtain the sparse frequency MIMO radar waveforms with the desired beam pattern. The simulation results verify the effectiveness of this method.

  18. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.

  19. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    OpenAIRE

    Tian, Xinmin; Saito, Hideki; Preis, Serguei V.; Garcia, Eric N.; Kozhukhov, Sergey S.; Masten, Matt; Cherkasov, Aleksei G.; Panchenko, Nikolay

    2015-01-01

    Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A ...

  20. Unintegrated sea quark at small x and vector boson production

    CERN Document Server

    Hautmann, F; Jung, H

    2012-01-01

    Parton-shower event generators that go beyond the collinear-ordering approximation at small x have so far included only gluon and valence quark channels at transverse momentum dependent level. In this contribution we provide a definition of a transverse momentum dependent (TMD) sea quark distribution valid in the small x region, which is based on the TMD gluon-to-quark splitting function. As an example process we consider vector boson production in the forward direction of one of the protons. The qq → Z matrix element (with one off-shell quark) is calculated in an explicit gauge invariant way, making use of high energy factorization.

  1. Encoding of rat working memory by power of multi-channel local field potentials via sparse non-negative matrix factorization

    Institute of Scientific and Technical Information of China (English)

    Xu Liu; Tiao-Tiao Liu; Wen-Wen Bai; Hu Yi; Shuang-Yan Li; Xin Tian

    2013-01-01

    Working memory plays an important role in human cognition. This study investigated how working memory was encoded by the power of multi-channel local field potentials (LFPs) based on sparse nonnegative matrix factorization (SNMF). SNMF was used to extract features from LFPs recorded from the prefrontal cortex of four Sprague-Dawley rats during a memory task in a Y maze, with 10 trials for each rat. Then the power-increased LFP components were selected as working memory-related features and the other components were removed. After that, the inverse operation of SNMF was used to study the encoding of working memory in the time-frequency domain. We demonstrated that theta and gamma power increased significantly during the working memory task. The results suggested that postsynaptic activity was simulated well by the sparse activity model. The theta and gamma bands were meaningful for encoding working memory.

  2. Technical note: Avoiding the direct inversion of the numerator relationship matrix for genotyped animals in single-step genomic best linear unbiased prediction solved with the preconditioned conjugate gradient.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I

    2017-01-01

    This paper evaluates an efficient implementation to multiply the inverse of a numerator relationship matrix for genotyped animals () by a vector (). The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix () including genotyped animals and their ancestors. The elements of were rapidly calculated with the Henderson's rule and stored as sparse matrices in memory. Implementation of was by a series of sparse matrix-vector multiplications. Diagonal elements of , which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation of was compared with explicit inversion of with 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 sec, 3 min, and 5 min, respectively, for setting up. Only <1 sec was required for the multiplication in each PCG iteration for any data sets. When the equations in ssGBLUP are solved with the PCG algorithm, is no longer a limiting factor in the computations.
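
    The point exploited above carries over to any PCG solver: the coefficient matrix is needed only through its action on a vector, which can be supplied as a chain of sparse products instead of an explicitly stored inverse. A minimal SciPy sketch of that pattern on a generic sparse SPD system with a Jacobi preconditioner follows; it does not reproduce the ssGBLUP equations.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# A generic sparse SPD test matrix (1-D Laplacian) standing in for the
# coefficient matrix of the mixed-model equations.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# The solver only ever asks for A @ v; here that is a single sparse product,
# but it could equally be a series of sparse multiplications.
matvec = LinearOperator((n, n), matvec=lambda v: A @ v)

# Jacobi (diagonal) preconditioner, also supplied as an operator.
d_inv = 1.0 / A.diagonal()
precond = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(matvec, b, M=precond, maxiter=5000)
print(info == 0, np.linalg.norm(A @ x - b))
```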

  3. Pair production of intermediate vector bosons

    International Nuclear Information System (INIS)

    Mikaelian, K.O.

    1979-01-01

    The production of intermediate vector boson pairs W⁺W⁻, Z⁰Z⁰, W±Z⁰ and W±γ in pp and p anti-p collisions is discussed. The motivation is to detect the self-interactions among the four intermediate vector bosons.

  4. Parallel sparse direct solver for integrated circuit simulation

    CERN Document Server

    Chen, Xiaoming; Yang, Huazhong

    2017-01-01

    This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques. · Introduces complicated algorithms of sparse linear solvers, using concise principles and simple examples, without complex theory or lengthy derivations; · Describes a parallel sparse direct solver that can be adopted to accelerate any SPICE-like integrated circuit simulato...

  5. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul

    2012-12-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity. © 1991-2012 IEEE.

  6. Vector Production in an Academic Environment: A Tool to Assess Production Costs

    Science.gov (United States)

    Boeke, Aaron; Doumas, Patrick; Reeves, Lilith; McClurg, Kyle; Bischof, Daniela; Sego, Lina; Auberry, Alisha; Tatikonda, Mohan

    2013-01-01

    Abstract Generating gene and cell therapy products under good manufacturing practices is a complex process. When determining the cost of these products, researchers must consider the large number of supplies used for manufacturing and the personnel and facility costs to generate vector and maintain a cleanroom facility. To facilitate cost estimates, the Indiana University Vector Production Facility teamed with the Indiana University Kelley School of Business to develop a costing tool that, in turn, provides pricing. The tool is designed in Microsoft Excel and is customizable to meet the needs of other core facilities. It is available from the National Gene Vector Biorepository. The tool allows cost determinations using three different costing methods and was developed in an effort to meet the A21 circular requirements for U.S. core facilities performing work for federally funded projects. The costing tool analysis reveals that the cost of vector production does not have a linear relationship with batch size. For example, increasing the production from 9 to 18 liters of a retroviral vector product increases total costs a modest 1.2-fold rather than doubling in total cost. The analysis discussed in this article will help core facilities and investigators plan a cost-effective strategy for gene and cell therapy production. PMID:23360377

  7. The mass spectrum of the Schwinger model with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Cyprus Univ., Nicosia (Cyprus). Dept. of Physics

    2013-07-15

    We show the feasibility of tensor network solutions for lattice gauge theories in Hamiltonian formulation by applying matrix product states algorithms to the Schwinger model with zero and non-vanishing fermion mass. We introduce new techniques to compute excitations in a system with open boundary conditions, and to identify the states corresponding to low momentum and different quantum numbers in the continuum. For the ground state and both the vector and scalar mass gaps in the massive case, the MPS technique attains precisions comparable to the best results available from other techniques.

  8. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows one to automatically detect and filter spurious correlations. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
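
    A minimal sketch of the underlying idea of regularizing a sample correlation matrix by thresholding small entries is given below (plain NumPy, fixed threshold, synthetic data); the adaptive, robust estimator proposed in the paper is considerably more refined than this.

```python
import numpy as np

def thresholded_correlation(X, tau=0.2):
    """Sample correlation of the columns of X (rows = samples), with entries
    whose absolute value falls below tau set to zero (diagonal kept)."""
    R = np.corrcoef(X, rowvar=False)
    R_reg = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_reg, 1.0)
    return R_reg

# Toy "expression" matrix: 50 samples x 200 genes, mostly uncorrelated noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(50)   # one truly correlated pair

R_reg = thresholded_correlation(X, tau=0.4)
print(R_reg[0, 1])   # the strong correlation survives the threshold
print(np.mean(R_reg[np.triu_indices(200, k=1)] != 0.0))  # surviving off-diagonal fraction (close to zero)
```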

  9. A fast algorithm for sparse matrix computations related to inversion

    International Nuclear Information System (INIS)

    Li, S.; Wu, W.; Darve, E.

    2013-01-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  10. A fast algorithm for sparse matrix computations related to inversion

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: lisong@stanford.edu [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Wu, W. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Packard Building, Room 268, Stanford, CA 94305 (United States); Darve, E. [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Department of Mechanical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Room 209, Stanford, CA 94305 (United States)

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round

  11. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    Science.gov (United States)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic feature used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since a hexagonal image has more inherent merits than a square image in making the recognition process more efficient, we extract SIFT keypoints in the hexagonal-sampling image. Instead of matching SIFT features, firstly the sparse representation of each SIFT keypoint is given according to the constructed dictionary; secondly these sparse vectors are quantized according to the dictionary; finally each face image is represented by a histogram and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave a higher face recognition rate than other methods on the ORL and Yale B face databases; also, the effectiveness of hexagonal sampling in the proposed method is verified.

  12. Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing

    Science.gov (United States)

    Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei

    2018-04-01

    We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrix (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.

  13. Iterative solution of general sparse linear systems on clusters of workstations

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Gen-Ching; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    Solving sparse irregularly structured linear systems on parallel platforms poses several challenges. First, sparsity makes it difficult to exploit data locality, whether in a distributed or shared memory environment. A second, perhaps more serious, challenge is to find efficient ways to precondition the system. Preconditioning techniques which have a large degree of parallelism, such as multicolor SSOR, often have a slower rate of convergence than their sequential counterparts. Finally, a number of other computational kernels such as inner products could ruin any gains from parallel speed-ups, and this is especially true on workstation clusters where start-up times may be high. In this paper we discuss these issues and report on our experience with PSPARSLIB, an on-going project for building a library of parallel iterative sparse matrix solvers.

  14. The Cross Product of Two Vectors Is Not Just Another Vector--A Major Misconception Being Perpetuated in Calculus and Vector Analysis Textbooks.

    Science.gov (United States)

    Elk, Seymour B.

    1997-01-01

    Suggests that the cross product of two vectors can be more easily and accurately explained by starting from the perspective of dyadics because then the concept of vector multiplication has a simple geometrical picture that encompasses both the dot and cross products in any number of dimensions in terms of orthogonal unit vector components. (AIM)

  15. Rotational image deblurring with sparse matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Nagy, James G.; Tigkos, Konstantinos

    2014-01-01

    We describe iterative deblurring algorithms that can handle blur caused by a rotation along an arbitrary axis (including the common case of pure rotation). Our algorithms use a sparse-matrix representation of the blurring operation, which allows us to easily handle several different boundary...

  16. Parallelized preconditioned model building algorithm for matrix factorization

    OpenAIRE

    Kaya, Kamer; Birbil, İlker; Öztürk, Mehmet Kaan; Gohari, Amir

    2017-01-01

    Matrix factorization is a common task underlying several machine learning applications such as recommender systems, topic modeling, or compressed sensing. Given a large and possibly sparse matrix A, we seek two smaller matrices W and H such that their product is as close to A as possible. The objective is minimizing the sum of square errors in the approximation. Typically such problems involve hundreds of thousands of unknowns, so an optimizer must be exceptionally efficient. In this study, a...

  17. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices storage and operations can be saved by operating only ... and storing only nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each ...

  18. Amino acid "little Big Bang": representing amino acid substitution matrices as dot products of Euclidian vectors.

    Science.gov (United States)

    Zimmermann, Karel; Gibrat, Jean-François

    2010-01-04

    Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. This vector encoding introduces a Euclidian metric in the amino acid space, consistent with substitution matrices. Such a numerical description of the amino acid is useful when intrinsic properties of amino acids are necessary, for instance, building sequence profiles or finding consensus sequences, using machine learning algorithms such as Support Vector Machine and Neural Networks algorithms.
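
    The encoding described above can be illustrated in a few lines: factor a symmetric score matrix with an SVD and scale the singular vectors so that dot products reproduce the original entries. The 4x4 matrix below is a made-up stand-in, not actual PAM or BLOSUM values.

```python
import numpy as np

# A small made-up symmetric "substitution" matrix (stand-in for BLOSUM/PAM values).
S = np.array([
    [ 4., -1., -2.,  0.],
    [-1.,  5.,  1., -1.],
    [-2.,  1.,  6., -2.],
    [ 0., -1., -2.,  5.],
])

# Singular value decomposition: S = U diag(s) Vt.
U, s, Vt = np.linalg.svd(S)

# Encode each amino acid by Euclidian vectors scaled by sqrt(s); entry S[i, j]
# is then the dot product of row i of L with row j of R.
L = U * np.sqrt(s)        # left vectors, one row per amino acid
R = Vt.T * np.sqrt(s)     # right vectors (coincide with L when S is positive semidefinite)

print(np.allclose(L @ R.T, S))        # dot products reproduce the matrix
print(np.dot(L[0], R[1]), S[0, 1])    # single-entry check
```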

  19. Amino acid "little Big Bang": Representing amino acid substitution matrices as dot products of Euclidian vectors

    Directory of Open Access Journals (Sweden)

    Zimmermann Karel

    2010-01-01

    Full Text Available Abstract Background Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by the substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. Results We present a Euclidian vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot product of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties upon the substitution matrices. We also characterize and compare the PAM and BLOSUM series substitution matrices. Conclusions This vector encoding introduces a Euclidian metric in the amino acid space, consistent with substitution matrices. Such a numerical description of the amino acid is useful when intrinsic properties of amino acids are necessary, for instance, building sequence profiles or finding consensus sequences, using machine learning algorithms such as Support Vector Machine and Neural Networks algorithms.

  20. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Pilipchuk, L. A., E-mail: pilipchik@bsu.by [Belarussian State University, 220030 Minsk, 4, Nezavisimosti avenue, Republic of Belarus (Belarus); Pilipchuk, A. S., E-mail: an.pilipchuk@gmail.com [The Natural Resources and Environmental Protection Ministry of the Republic of Belarus, 220004 Minsk, 10 Kollektornaya Street, Republic of Belarus (Belarus)

    2015-11-30

    In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  1. Sparse linear systems: Theory of decomposition, methods, technology, applications and implementation in Wolfram Mathematica

    International Nuclear Information System (INIS)

    Pilipchuk, L. A.; Pilipchuk, A. S.

    2015-01-01

    In this paper we propose the theory of decomposition, methods, technologies, applications and implementation in Wolfram Mathematica for constructing the solutions of sparse linear systems. One of the applications is the Sensor Location Problem for a symmetric graph in the case when split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors that are assigned to the nodes. We obtain a sparse system of linear algebraic equations and investigate its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.

  2. Simulation of sparse matrix array designs

    Science.gov (United States)

    Boehm, Rainer; Heckel, Thomas

    2018-04-01

    Matrix phased array probes are becoming more prominently used in industrial applications. The main drawbacks of probes incorporating a very large number of transducer elements are the need for appropriate cabling and for an ultrasonic device offering many parallel channels. Matrix arrays designed for extended functionality feature at least 64 or more elements. Typical arrangements are square matrices, e.g., 8 by 8 or 11 by 11, or rectangular matrices, e.g., 8 by 16 or 10 by 12, to fit a 128-channel phased array system. In some phased array systems, the number of simultaneously active elements is limited to a certain number, e.g., 32 or 64. Those setups do not allow running the probe with all elements active, which may cause a significant change in the directivity pattern of the resulting sound beam. When only a subset of elements can be used during a single acquisition, different strategies may be applied to collect enough data for rebuilding the missing information from the echo signal. Omission of certain elements may be one approach; overlay of subsequent shots with different active areas may be another one. This paper presents the influence of a decreased number of active elements on the sound field and their distribution on the array. Solutions using subsets with different element activity patterns on matrix arrays and their advantages and disadvantages concerning the sound field are evaluated using semi-analytical simulation tools. Sound field criteria are discussed, which are significant for non-destructive testing results and for the system setup.

  3. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    Science.gov (United States)

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the

  4. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    Science.gov (United States)

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the

  5. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    Directory of Open Access Journals (Sweden)

    Xin Tang

    Full Text Available Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our
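
    The decomposition these three records rely on, splitting data into a low-rank part plus a sparse error part, can be sketched with a simple alternating scheme: a truncated-SVD step followed by entrywise soft-thresholding. This is a generic illustration in plain NumPy, not the specific solver used for LRSE+SC; the rank, threshold and synthetic data are assumptions.

```python
import numpy as np

def low_rank_plus_sparse(D, rank, lam=0.2, n_iter=50):
    """Split D into a rank-`rank` component L and a sparse residual S by
    alternating a truncated-SVD fit with entrywise soft-thresholding."""
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # soft threshold
    return L, S

# Synthetic data: a rank-3 matrix corrupted at a few entries by large errors.
rng = np.random.default_rng(0)
clean = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
errors = np.zeros_like(clean)
errors.flat[rng.choice(clean.size, size=30, replace=False)] = 5.0
D = clean + errors

L, S = low_rank_plus_sparse(D, rank=3)
print(np.linalg.norm(L - clean) / np.linalg.norm(clean))   # relative error of the low-rank part
print(np.count_nonzero(S))                                 # the residual stays sparse
```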

  6. Single vector leptoquark production in e+e- and γe colliders

    International Nuclear Information System (INIS)

    Aliev, T.M.; Iltan, E.; Pak, N.K.

    1996-01-01

    We consider the single vector leptoquark (LQ) production at e+e- and γe colliders for two values of the center-of-mass energy, √s = 500 GeV and √s = 1000 GeV, in a model-independent framework. We find that the cross sections for the single gauge and nongauge vector LQ productions are almost equal. The discovery limit for a single vector LQ production is obtained for both cases. It is shown that in e+e- collisions the single vector LQ production is more favorable than the vector LQ pair production, if the Yukawa coupling constant is κ∼1. copyright 1996 The American Physical Society

  7. Conformal Vector Fields on Doubly Warped Product Manifolds and Applications

    Directory of Open Access Journals (Sweden)

    H. K. El-Sayied

    2016-01-01

    Full Text Available This article aims to study and explore conformal vector fields on doubly warped product manifolds as well as on doubly warped spacetime. We then derive sufficient conditions for matter and Ricci collineations on doubly warped product manifolds. Special attention is paid to concurrent vector fields. Finally, Ricci solitons on doubly warped product spacetime admitting conformal vector fields are considered.

  8. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm is employed to learn a sparse linear prediction dictionary that serves as the sparse basis of the underlying speech signals. Compressed signals are obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) are adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary improves the performance of speech signal reconstruction compared with the discrete cosine transform (DCT) matrix.
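
    As a rough illustration of the sampling and recovery loop described above (not the paper's code), the sketch below compresses one frame with a random Gaussian matrix and recovers it by OMP over a given dictionary D, which is assumed to have been learned beforehand (e.g., by K-SVD); scikit-learn's OrthogonalMatchingPursuit stands in for a hand-written solver, and the sizes in the usage comment are made up.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def cs_recover_frame(x, D, m, k, seed=0):
            """Compressively sample a frame x (length n) and recover it via OMP.

            D : (n, n_atoms) dictionary in which x is assumed to be k-sparse.
            m : number of random measurements (m < n).
            """
            n = x.shape[0]
            rng = np.random.default_rng(seed)
            Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
            y = Phi @ x                                      # compressed measurements
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
            omp.fit(Phi @ D, y)                              # solve y ~ (Phi D) a with a k-sparse a
            return D @ omp.coef_                             # reconstructed frame

        # usage with made-up sizes: x_hat = cs_recover_frame(frame, D, m=128, k=20)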

  9. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    Science.gov (United States)

    Poole, E. L.

    1986-01-01

    In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p) length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or to another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
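
    A minimal illustration of the multi-color idea in its simplest (2-color, red-black) form; this is not the CYBER 205 implementation, only a way to verify that symmetrically permuting a 5-point Laplacian by color yields diagonal blocks that are themselves diagonal, which is what enables the long vector operations in the ICCG solves.

        import numpy as np
        import scipy.sparse as sp

        def red_black_permutation(nx, ny):
            """Return the red-black ordering of an nx-by-ny grid (row-major numbering)."""
            idx = np.arange(nx * ny).reshape(ny, nx)
            color = np.add.outer(np.arange(ny), np.arange(nx)) % 2   # checkerboard colors
            return np.concatenate([idx[color == 0], idx[color == 1]])

        nx = ny = 4
        # 2-D 5-point Laplacian as a Kronecker sum of two 1-D Laplacians
        A = sp.csr_matrix(sp.kronsum(sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx)),
                                     sp.diags([-1, 2, -1], [-1, 0, 1], shape=(ny, ny))))
        p = red_black_permutation(nx, ny)
        Ap = A[p][:, p]                      # symmetrically permuted matrix
        half = len(p) // 2
        # same-colored unknowns are never adjacent, so each diagonal block is diagonal:
        print(np.count_nonzero(Ap[:half, :half].toarray()
                               - np.diag(Ap.diagonal()[:half])))   # prints 0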

  10. Bound on the estimation grid size for sparse reconstruction in direction of arrival estimation

    NARCIS (Netherlands)

    Coutiño Minguez, M.A.; Pribic, R; Leus, G.J.T.

    2016-01-01

    A bound for sparse reconstruction involving both the signal-to-noise ratio (SNR) and the estimation grid size is presented. The bound is illustrated for the case of a uniform linear array (ULA). By reducing the number of possible sparse vectors present in the feasible set of a constrained ℓ1-norm

  11. Off-diagonal helicity density matrix elements for vector mesons produced at LEP

    International Nuclear Information System (INIS)

    Anselmino, M.; Bertini, M.; Quintairos, P.

    1997-05-01

    Final state q q-bar interactions may give rise to nonzero values of the off-diagonal element ρ1,-1 of the helicity density matrix of vector mesons produced in e+e- annihilations, as confirmed by recent OPAL data on φ and D*'s. Predictions are given for ρ1,-1 of several mesons produced at large z and small pT, collinear with the parent jet; the values obtained for φ and D* are in agreement with data. (author)

  12. EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations

    International Nuclear Information System (INIS)

    Garbow, Burton S.; Cline, A.K.; Meyering, J.

    1993-01-01

    1 - Description of problem or function: EISPACK3 is a collection of 75 FORTRAN subroutines, both single- and double-precision, that compute the eigenvalues and eigenvectors of nine classes of matrices. The package can determine the eigensystem of complex general, complex Hermitian, real general, real symmetric, real symmetric band, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition, there are two routines which use the singular value decomposition to solve certain least squares problems. The individual subroutines are - Identification/Description: BAKVEC: Back transform vectors of matrix formed by FIGI; BALANC: Balance a real general matrix; BALBAK: Back transform vectors of matrix formed by BALANC; BANDR: Reduce sym. band matrix to sym. tridiag. matrix; BANDV: Find some vectors of sym. band matrix; BISECT: Find some values of sym. tridiag. matrix; BQR: Find some values of sym. band matrix; CBABK2: Back transform vectors of matrix formed by CBAL; CBAL: Balance a complex general matrix; CDIV: Perform division of two complex quantities; CG: Driver subroutine for a complex general matrix; CH: Driver subroutine for a complex Hermitian matrix; CINVIT: Find some vectors of complex Hess. matrix; COMBAK: Back transform vectors of matrix formed by COMHES; COMHES: Reduce complex matrix to complex Hess. (elementary); COMLR: Find all values of complex Hess. matrix (LR); COMLR2: Find all values/vectors of cmplx Hess. matrix (LR); COMQR: Find all values of complex Hessenberg matrix (QR); COMQR2: Find all values/vectors of cmplx Hess. matrix (QR); CORTB: Back transform vectors of matrix formed by CORTH; CORTH: Reduce complex matrix to complex Hess. (unitary); CSROOT: Find square root of complex quantity; ELMBAK: Back transform vectors of matrix formed by ELMHES; ELMHES: Reduce real matrix to real Hess. (elementary); ELTRAN: Accumulate transformations from ELMHES (for HQR2); EPSLON: Estimate unit roundoff

  13. Music Signal Processing Using Vector Product Neural Networks

    Science.gov (United States)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors and then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
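
    A toy sketch of a single vector product neuron of the kind described (inputs, weights and outputs are three-dimensional vectors combined through the cross product); it only illustrates the neuron type and is not the authors' network, training procedure or dimensionality transformations.

        import numpy as np

        def vector_product_neuron(X, W, b, activation=np.tanh):
            """One vector-product neuron: sum of 3-D cross products of inputs and weights.

            X : (n_inputs, 3) array of 3-D input vectors
            W : (n_inputs, 3) array of 3-D weight vectors
            b : (3,) bias vector
            Returns a 3-D output vector.
            """
            z = np.cross(W, X).sum(axis=0) + b     # elementwise cross products, then sum
            return activation(z)

        # usage with random values, just to show the shapes involved
        rng = np.random.default_rng(0)
        y = vector_product_neuron(rng.standard_normal((5, 3)),
                                  rng.standard_normal((5, 3)),
                                  rng.standard_normal(3))
        print(y.shape)   # (3,)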

  14. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), sparse representation-based classifier (SRC), nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
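
    A compact sketch of the classification rule described above: code the test sample over the matrix of training features with non-negative least squares and assign the class with the smallest class-wise reconstruction residual. It relies on scipy's nnls and is an illustration rather than the authors' implementation.

        import numpy as np
        from scipy.optimize import nnls

        def nnls_sparse_coding_classify(A, labels, y):
            """A: (d, n) matrix of training feature vectors (columns),
            labels: length-n array of class ids, y: (d,) test feature vector.
            Returns the predicted class label."""
            x, _ = nnls(A, y)                       # non-negative code over all training samples
            classes = np.unique(labels)
            residuals = []
            for c in classes:
                xc = np.where(labels == c, x, 0.0)  # keep only the coefficients of class c
                residuals.append(np.linalg.norm(y - A @ xc))
            return classes[int(np.argmin(residuals))]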

  15. Removal of envelope protein-free retroviral vectors by anion-exchange chromatography to improve product quality.

    Science.gov (United States)

    Rodrigues, Teresa; Alves, Ana; Lopes, António; Carrondo, Manuel J T; Alves, Paula M; Cruz, Pedro E

    2008-10-01

    We have investigated the role of the retroviral lipid bilayer and envelope proteins in the adsorption of retroviral vectors (RVs) to a Fractogel DEAE matrix. Intact RVs and their degradation components (envelope protein-free vectors and solubilized vector components) were adsorbed to this matrix and eluted using a linear gradient. Envelope protein-free RVs (Env(-)) and soluble envelope proteins (gp70) eluted in a significantly lower range of conductivities than intact RVs (Env(+)) (13.7-30 mS/cm for Env(-) and gp70 proteins vs. 47-80 mS/cm for Env(+)). The zeta (zeta)-potential of Env(+) and Env(-) vectors was evaluated showing that envelope proteins define the pI of the viral particles (pI (Env(+)) improvement to the quality of retroviral preparations for gene therapy applications.

  16. Chord Recognition Based on Temporal Correlation Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Zhongyang Rao

    2016-05-01

    Full Text Available In this paper, we propose a method called temporal correlation support vector machine (TCSVM) for automatic major-minor chord recognition in audio music. We first use robust principal component analysis to separate the singing voice from the music to reduce the influence of the singing voice and consider the temporal correlations of the chord features. Using robust principal component analysis, we expect the low-rank component of the spectrogram matrix to contain the musical accompaniment and the sparse component to contain the vocal signals. Then, we extract a new logarithmic pitch class profile (LPCP) feature called enhanced LPCP from the low-rank part. To exploit the temporal correlation among the LPCP features of chords, we propose an improved support vector machine algorithm called TCSVM. We perform this study using the MIREX’09 (Music Information Retrieval Evaluation eXchange) Audio Chord Estimation dataset. Furthermore, we conduct comprehensive experiments using different pitch class profile feature vectors to examine the performance of TCSVM. The results of our method are comparable to the state-of-the-art methods that entered the MIREX in 2013 and 2014 for the MIREX’09 Audio Chord Estimation task dataset.

  17. Orbit Classification of Qutrit via the Gram Matrix

    International Nuclear Information System (INIS)

    Tay, B. A.; Zainuddin, Hishamuddin

    2008-01-01

    We classify the orbits generated by unitary transformation on the density matrices of the three-state quantum systems (qutrits) via the Gram matrix. The Gram matrix is a real symmetric matrix formed from the Hilbert–Schmidt scalar products of the vectors lying in the tangent space to the orbits. The rank of the Gram matrix determines the dimensions of the orbits, which fall into three classes for qutrits. (general)

  18. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

    This work investigates the problem of permuting a sparse rectangular matrix into block diagonal form. Block diagonal form of a matrix grants an inherent parallelism for the solution of the underlying problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different societies, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and run time.

  19. Interference-Aware OFDM Receiver for Channels with Sparse Common Supports

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Manchón, Carles Navarro; Badiu, Mihai Alin

    2017-01-01

    We design an algorithm for OFDM receivers operating in co-channel interference conditions, where the serving and interfering transmitters are synchronized in time. The channel estimation problem is formulated as one of sparse signal reconstruction using multiple measurement vectors. The proposed...

  20. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  1. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  2. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  3. Production of heavy sterile neutrinos from vector boson decay at electroweak temperatures

    Science.gov (United States)

    Lello, Louis; Boyanovsky, Daniel; Pisarski, Robert D.

    2017-02-01

    In the standard model extended with a seesaw mass matrix, we study the production of sterile neutrinos from the decay of vector bosons at temperatures near the masses of the electroweak bosons. We derive a general quantum kinetic equation for the production of sterile neutrinos and their effective mixing angles, which is applicable over a wide range of temperature, to all orders in interactions of the standard model and to leading order in a small mixing angle for the neutrinos. We emphasize the relation between the production rate and Landau damping at one-loop order and show that production rates and effective mixing angles depend sensitively upon the neutrino's helicity. Sterile neutrinos with positive helicity interact more weakly with the medium than those with negative helicity, and their effective mixing angle is not modified significantly. Negative helicity states couple more strongly to the vector bosons, but their mixing angle is strongly suppressed by the medium. Consequently, if the mass of the sterile neutrino is ≲8.35 MeV , there are fewer states with negative helicity produced than those with positive helicity. There is an Mikheyev-Smirnov-Wolfenstein-type resonance in the absence of lepton asymmetry, but due to screening by the damping rate, the production rate is not enhanced. Sterile neutrinos with negative helicity freeze out at Tf-≃5 GeV , whereas positive helicity neutrinos freeze out at Tf+≃8 GeV , with both distributions far from thermal. As the temperature decreases, due to competition between a decreasing production rate and an increasing mixing angle, the distribution function for states with negative helicity is broader in momentum and hotter than that for those with positive helicity. Sterile neutrinos produced via vector boson decay do not satisfy the abundance, lifetime, and cosmological constraints to be the sole dark matter component in the Universe. Massive sterile neutrinos produced via vector boson decay might solve the 7Li

  4. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
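
    The underlying trick of approximating a dense operator by a product of sparse transforms can be imitated in a few lines: move the operator into an orthonormal transform domain where it compresses well, threshold the small entries, and apply matrix-vector products as transform, sparse multiply, inverse transform. The sketch below uses a 2-D orthonormal DCT and a smooth test operator purely for illustration; it is not the wavelet/lossy-coding pipeline of the paper, and the 5% retention level is an arbitrary assumption.

        import numpy as np
        from scipy.fft import dct, idct
        from scipy.sparse import csr_matrix

        def compress_operator(A, keep=0.05):
            """Represent the dense operator A by a sparse matrix in the 2-D DCT domain."""
            B = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')   # B = C A C^T
            thresh = np.quantile(np.abs(B), 1.0 - keep)                   # keep ~5% of entries
            return csr_matrix(np.where(np.abs(B) >= thresh, B, 0.0))

        def apply_compressed(B_sparse, x):
            """Approximate A @ x as C^T (B (C x)), where C is the orthonormal DCT."""
            xt = dct(x, norm='ortho')              # forward transform of the input vector
            yt = B_sparse @ xt                     # cheap sparse multiply
            return idct(yt, norm='ortho')          # inverse transform back

        # rough check on a smooth (compressible) operator; values here are illustrative
        n = 128
        A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 20.0) ** 2)
        x = np.random.default_rng(0).standard_normal(n)
        B = compress_operator(A)
        print(np.linalg.norm(A @ x - apply_compressed(B, x)) / np.linalg.norm(A @ x))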

  5. M3: Matrix Multiplication on MapReduce

    DEFF Research Database (Denmark)

    Silvestri, Francesco; Ceccarello, Matteo

    2015-01-01

    M3 is a Hadoop library for performing dense and sparse matrix multiplication in MapReduce. The library is based on multi-round algorithms exploiting the 3D decomposition of the problem.

  6. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral data sets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.

  7. A Study on GPU Computing of Bi-conjugate Gradient Method for Finite Element Analysis of the Incompressible Navier-Stokes Equations

    International Nuclear Information System (INIS)

    Yoon, Jong Seon; Choi, Hyoung Gwon; Jeon, Byoung Jin; Jung, Hye Dong

    2016-01-01

    A parallel algorithm of bi-conjugate gradient method was developed based on CUDA for parallel computation of the incompressible Navier-Stokes equations. The governing equations were discretized using splitting P2P1 finite element method. Asymmetric stenotic flow problem was solved to validate the proposed algorithm, and then the parallel performance of the GPU was examined by measuring the elapsed times. Further, the GPU performance for sparse matrix-vector multiplication was also investigated with a matrix of fluid-structure interaction problem. A kernel was generated to simultaneously compute the inner product of each row of sparse matrix and a vector. In addition, the kernel was optimized to improve the performance by using both parallel reduction and memory coalescing. In the kernel construction, the effect of warp on the parallel performance of the present CUDA was also examined. The present GPU computation was more than 7 times faster than the single CPU by double precision.

  8. A Study on GPU Computing of Bi-conjugate Gradient Method for Finite Element Analysis of the Incompressible Navier-Stokes Equations

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Seon; Choi, Hyoung Gwon [Seoul Nat’l Univ. of Science and Technology, Seoul (Korea, Republic of); Jeon, Byoung Jin [Yonsei Univ., Seoul (Korea, Republic of); Jung, Hye Dong [Korea Electronics Technology Institute, Seongnam (Korea, Republic of)

    2016-09-15

    A parallel algorithm of bi-conjugate gradient method was developed based on CUDA for parallel computation of the incompressible Navier-Stokes equations. The governing equations were discretized using splitting P2P1 finite element method. Asymmetric stenotic flow problem was solved to validate the proposed algorithm, and then the parallel performance of the GPU was examined by measuring the elapsed times. Further, the GPU performance for sparse matrix-vector multiplication was also investigated with a matrix of fluid-structure interaction problem. A kernel was generated to simultaneously compute the inner product of each row of sparse matrix and a vector. In addition, the kernel was optimized to improve the performance by using both parallel reduction and memory coalescing. In the kernel construction, the effect of warp on the parallel performance of the present CUDA was also examined. The present GPU computation was more than 7 times faster than the single CPU by double precision.
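
    To make the kernel idea concrete, here is the per-row form of the CSR sparse matrix-vector product that the study maps onto the GPU (each row reduces to an inner product, which a warp then computes with a parallel reduction and coalesced loads); this is a plain Python reference sketch, not the CUDA kernel of the paper.

        import numpy as np
        import scipy.sparse as sp

        def csr_spmv(indptr, indices, data, x):
            """y = A @ x for A in CSR form; each row is an independent inner product
            (the unit of work that a warp reduces in the GPU kernel)."""
            n_rows = len(indptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                start, end = indptr[i], indptr[i + 1]
                y[i] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # quick check against scipy
        A = sp.random(200, 200, density=0.02, format='csr', random_state=0)
        x = np.random.default_rng(0).standard_normal(200)
        print(np.allclose(csr_spmv(A.indptr, A.indices, A.data, x), A @ x))   # True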

  9. Fault Diagnosis of Complex Industrial Process Using KICA and Sparse SVM

    Directory of Open Access Journals (Sweden)

    Jie Xu

    2013-01-01

    Full Text Available New approaches are proposed for complex industrial process monitoring and fault diagnosis based on kernel independent component analysis (KICA) and sparse support vector machine (SVM). The KICA method is a two-phase algorithm: whitened kernel principal component analysis (KPCA) followed by ICA. The data are first mapped into a high-dimensional feature subspace. Then, the ICA algorithm seeks the projection directions in the KPCA whitened space. Performance monitoring is implemented through constructing the statistical index and control limit in the feature space. If the statistical indexes exceed the predefined control limit, a fault may have occurred. Then, the nonlinear score vectors are calculated and fed into the sparse SVM to identify the faults. The proposed method is applied to the simulation of the Tennessee Eastman (TE) chemical process. The simulation results show that the proposed method can identify various types of faults accurately and rapidly.

  10. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L 1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)

  11. Fast alternating projected gradient descent algorithms for recovering spectrally sparse signals

    KAUST Repository

    Cho, Myung

    2016-06-24

    We propose fast algorithms that speed up or improve the performance of recovering spectrally sparse signals from un-derdetermined measurements. Our algorithms are based on a non-convex approach of using alternating projected gradient descent for structured matrix recovery. We apply this approach to two formulations of structured matrix recovery: Hankel and Toeplitz mosaic structured matrix, and Hankel structured matrix. Our methods provide better recovery performance, and faster signal recovery than existing algorithms, including atomic norm minimization.

  12. Fast alternating projected gradient descent algorithms for recovering spectrally sparse signals

    KAUST Repository

    Cho, Myung; Cai, Jian-Feng; Liu, Suhui; Eldar, Yonina C.; Xu, Weiyu

    2016-01-01

    We propose fast algorithms that speed up or improve the performance of recovering spectrally sparse signals from un-derdetermined measurements. Our algorithms are based on a non-convex approach of using alternating projected gradient descent for structured matrix recovery. We apply this approach to two formulations of structured matrix recovery: Hankel and Toeplitz mosaic structured matrix, and Hankel structured matrix. Our methods provide better recovery performance, and faster signal recovery than existing algorithms, including atomic norm minimization.
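
    A stripped-down, Cadzow-style sketch of the alternating-projection idea behind Hankel structured matrix recovery: alternate between the rank-r projection (truncated SVD) and the Hankel projection (anti-diagonal averaging). The paper's algorithms additionally use projected gradient steps and handle underdetermined measurements, so this only illustrates the two projections involved.

        import numpy as np
        from scipy.linalg import hankel

        def project_rank(M, r):
            """Nearest rank-r matrix via a truncated SVD."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r]

        def project_hankel(M):
            """Average each anti-diagonal so the matrix is exactly Hankel again."""
            m, n = M.shape
            sums = np.zeros(m + n - 1)
            counts = np.zeros(m + n - 1)
            for i in range(m):
                for j in range(n):
                    sums[i + j] += M[i, j]
                    counts[i + j] += 1
            h = sums / counts
            return hankel(h[:m], h[m - 1:])

        def cadzow(signal, r, n_iter=50):
            """Denoise a spectrally r-sparse signal by alternating the two projections."""
            m = len(signal) // 2 + 1
            M = hankel(signal[:m], signal[m - 1:])
            for _ in range(n_iter):
                M = project_hankel(project_rank(M, r))
            # read the signal back off the first column and the last row
            return np.concatenate([M[:, 0], M[-1, 1:]])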

  13. P-SPARSLIB: A parallel sparse iterative solution package

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1994-12-31

    Iterative methods are gaining popularity in engineering and sciences at a time where the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable on a variety of platforms, including SIMD environments and shared memory environments.

  14. Enforced Sparse Non-Negative Matrix Factorization

    Science.gov (United States)

    2016-01-23

    Figure 4. Example NMF with and without sparsity.

  15. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
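
    A simplified, single-vector sketch of the kind of greedy selection described: pick the index whose (positive) correlation with the residual is largest, then refit on the current support with non-negative least squares. The paper's algorithms additionally enforce a support shared across several vectors, which is omitted here.

        import numpy as np
        from scipy.optimize import nnls

        def nonneg_omp(A, y, k):
            """Greedy recovery of a k-sparse, nonnegative x with y ~ A @ x."""
            n = A.shape[1]
            support, x = [], np.zeros(n)
            residual = y.copy()
            for _ in range(k):
                corr = A.T @ residual
                corr[support] = -np.inf                  # do not pick an index twice
                j = int(np.argmax(corr))
                if corr[j] <= 0:                         # no index improves the nonnegative fit
                    break
                support.append(j)
                coef, _ = nnls(A[:, support], y)         # refit on the current support
                x[:] = 0.0
                x[support] = coef
                residual = y - A @ x
            return x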

  16. Common-Mode Voltage Reduction of Three-to-Five Phase Indirect Matrix Converters with Zero-Current Vector Modulation

    DEFF Research Database (Denmark)

    Zhang, Guanguan; Yang, Jian; Yang, Yongheng

    2017-01-01

    In order to reduce the Common-Mode Voltage (CMV) in three-to-five phase indirect matrix converters, three improved Space Vector Pulse Width Modulation (SVPWM) methods are proposed and discussed. The improved modulation schemes are achieved by reorganizing zero vectors from the inversion stage […] in the inversion stage, which results in a large amount of third-order harmonics in output currents. In addition, the method that utilizes two adjacent active current vectors (ACVs) and the method that uses two non-adjacent ACVs in the rectification stage have the same CMV peak value. By contrast, the latter […] achieves a lower Total Harmonic Distortion (THD) level of the output currents. Simulation results verify the effectiveness of the proposed methods.

  17. Image Jacobian Matrix Estimation Based on Online Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Shangqin Mao

    2012-10-01

    Full Text Available Visual servoing is an important research area in the field of robotics. It has proven difficult to achieve successful results for machine vision and robotics in unstructured environments without using any a priori camera or kinematic models. In uncalibrated visual servoing, image Jacobian matrix estimation methods can be divided into two groups: online methods and offline methods. The offline method is not appropriate for most natural environments. The online method is robust but rough. Moreover, if the image feature configuration changes, it needs to restart the approximation procedure. A novel approach based on an online support vector regression (OL-SVR) algorithm is proposed which overcomes these drawbacks and combines the virtues just mentioned.

  18. Implementation in graphic form of an observability algorithm in energy network using sparse vectors; Implementacao, em ambiente grafico, de um algoritmo de observabilidade em redes de energia utilizando vetores esparsos

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Claudio Eduardo Scriptori de

    1996-02-01

    In electric power system operating centers, understanding the behavior of the electrical network has become increasingly important. State estimation is essential for adequate operation of the system; however, before performing state estimation one needs to know whether the system is observable, otherwise the estimation is not possible. The main objective of this work is to develop software that displays the whole network when it is observable, or the observable islands of the network otherwise. As theoretical background, the theory and algorithm based on triangular factorization of the gain matrix, together with the concept of factorization paths developed by Bretas et al., were used. The algorithm was adapted to the Windows graphical environment so that the numerical results of the network observability analysis are shown on the computer screen in graphical form, in contrast to an approach based only on numerical factorization of the gain matrix. The Borland C++ compiler for Windows, version 4.0, was used to implement the algorithm because of the facilities it offers for code generation. Tests on networks with 6, 14 and 30 buses lead to: (1) simplification of the observability analysis by using sparse vectors and triangular factorization of the gain matrix; (2) similar behavior of the three test systems, with strong indications that the routine works well for any system, particularly for systems with larger numbers of buses and lines; (3) an alternative, graphical way of presenting the numerical results with the program developed here. (author)

  19. Combinatorial Algorithms for Computing Column Space Bases ThatHave Sparse Inverses

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Chow, Edmond; Pothen, Alex

    2005-03-18

    This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide and conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.

  20. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) is a new learning method developed in recent years based on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant samples; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of each margin vector, and margin vectors whose distance exceeds the specified value are removed; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public data sets and one field data set of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
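
    A small, hedged sketch of the screening step described above, computing each candidate margin vector's geometric distance to the current hyperplane and discarding those beyond a threshold; it uses a linear scikit-learn SVC purely for illustration and is not the authors' MISVM code.

        import numpy as np
        from sklearn.svm import SVC

        def filter_margin_vectors(clf, X_candidates, max_distance):
            """Keep only candidate samples whose geometric distance to the decision
            hyperplane of a *linear* SVC is at most max_distance."""
            w_norm = np.linalg.norm(clf.coef_)                    # ||w|| of the linear SVC
            dist = np.abs(clf.decision_function(X_candidates)) / w_norm
            return X_candidates[dist <= max_distance]

        # usage sketch: clf = SVC(kernel='linear').fit(X_old, y_old)
        #               X_keep = filter_margin_vectors(clf, X_new, max_distance=1.5)
        #               then retrain on the old support vectors plus X_keep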

  1. Generation of a non-transmissive Borna disease virus vector lacking both matrix and glycoprotein genes.

    Science.gov (United States)

    Fujino, Kan; Yamamoto, Yusuke; Daito, Takuji; Makino, Akiko; Honda, Tomoyuki; Tomonaga, Keizo

    2017-09-01

    Borna disease virus (BoDV), a prototype of mammalian bornavirus, is a non-segmented, negative strand RNA virus that often causes severe neurological disorders in infected animals, including horses and sheep. Unique among animal RNA viruses, BoDV transcribes and replicates non-cytopathically in the cell nucleus, leading to establishment of long-lasting persistent infection. This striking feature of BoDV indicates its potential as an RNA virus vector system. It has previously been demonstrated by our team that recombinant BoDV (rBoDV) lacking an envelope glycoprotein (G) gene develops persistent infections in transduced cells without loss of the viral genome. In this study, a novel non-transmissive rBoDV, rBoDV ΔMG, which lacks both matrix (M) and G genes in the genome, is reported. rBoDV-ΔMG expressing green fluorescence protein (GFP), rBoDV ΔMG-GFP, was efficiently generated in Vero/MG cells stably expressing both BoDV M and G proteins. Infection with rBoDV ΔMG-GFP was persistently maintained in the parent Vero cells without propagation within cell culture. The optimal ratio of M and G for efficient viral particle production by transient transfection of M and G expression plasmids into cells persistently infected with rBoDV ΔMG-GFP was also demonstrated. These findings indicate that the rBoDV ΔMG-based BoDV vector may provide an extremely safe virus vector system and could be a novel strategy for investigating the function of M and G proteins and the host range of bornaviruses. © 2017 The Societies and John Wiley & Sons Australia, Ltd.

  2. A performance study of sparse Cholesky factorization on INTEL iPSC/860

    Science.gov (United States)

    Zubair, M.; Ghose, M.

    1992-01-01

    The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.

  3. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5

  4. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    International Nuclear Information System (INIS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-01-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10
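
    The "handful of matrix operations on the Hankel matrix of moments" can be made concrete with the classical Golub-Welsch construction: Cholesky-factor the Hankel moment matrix, read off the three-term recurrence coefficients, and diagonalize the resulting Jacobi matrix to obtain collocation points and weights. The sketch below follows that standard construction rather than SAMBA's actual code and assumes the moment sequence is well conditioned.

        import numpy as np

        def gauss_from_moments(moments, n):
            """Return n Gaussian quadrature nodes and weights for the (possibly unknown)
            measure whose raw moments m_0, ..., m_{2n} are given."""
            m = np.asarray(moments, dtype=float)
            H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])  # Hankel matrix
            R = np.linalg.cholesky(H).T                      # upper-triangular factor, H = R^T R
            # three-term recurrence coefficients from the Cholesky factor
            alpha = np.zeros(n)
            beta = np.zeros(n - 1)
            alpha[0] = R[0, 1] / R[0, 0]
            for k in range(1, n):
                alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
                beta[k - 1] = R[k, k] / R[k - 1, k - 1]
            # Jacobi matrix: nodes are its eigenvalues, weights come from the first eigenvector row
            J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            nodes, vecs = np.linalg.eigh(J)
            weights = m[0] * vecs[0, :] ** 2
            return nodes, weights

        # sanity check: moments of the uniform density on [-1, 1] give Gauss-Legendre-type rules
        mom = [1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(7)]   # m_k = ∫ x^k dx / 2
        print(gauss_from_moments(mom, 3))   # nodes ≈ -0.7746, 0.0, +0.7746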

  5. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    Directory of Open Access Journals (Sweden)

    Xinmin Tian

    2015-01-01

    Full Text Available Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A set of workloads from several application domains is employed to conduct the performance study of our SIMD vectorization techniques. The performance results show that we achieved up to 12.5x performance gain on the Intel Xeon Phi coprocessor. We also demonstrate a 2000x performance speedup from the seamless integration of SIMD vectorization and parallelization.

  6. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.

  7. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2017-01-01

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  8. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit

    2017-04-05

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  9. Entanglement in Gaussian matrix-product states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Ericsson, Marie

    2006-01-01

    Gaussian matrix-product states are obtained as the outputs of projection operations from an ancillary space of M infinitely entangled bonds connecting neighboring sites, applied at each of N sites of a harmonic chain. Replacing the projections by associated Gaussian states, the building blocks, we show that the entanglement range in translationally invariant Gaussian matrix-product states depends on how entangled the building blocks are. In particular, infinite entanglement in the building blocks produces fully symmetric Gaussian states with maximum entanglement range. From their peculiar properties of entanglement sharing, a basic difference with spin chains is revealed: Gaussian matrix-product states can possess unlimited, long-range entanglement even with minimum number of ancillary bonds (M=1). Finally we discuss how these states can be experimentally engineered from N copies of a three-mode building block and N two-mode finitely squeezed states

  10. Simplified lentivirus vector production in protein-free media using polyethylenimine-mediated transfection.

    Science.gov (United States)

    Kuroda, Hitoshi; Kutner, Robert H; Bazan, Nicolas G; Reiser, Jakob

    2009-05-01

    During the past 12 years, lentiviral vectors have emerged as valuable tools for transgene delivery because of their ability to transduce nondividing cells and their capacity to sustain long-term transgene expression. Despite significant progress, the production of high-titer high-quality lentiviral vectors is cumbersome and costly. The most commonly used method to produce lentiviral vectors involves transient transfection using calcium phosphate (CaP)-mediated precipitation of plasmid DNAs. However, inconsistencies in pH can cause significant batch-to-batch variations in lentiviral vector titers, making this method unreliable. This study describes optimized protocols for lentiviral vector production based on polyethylenimine (PEI)-mediated transfection, resulting in more consistent lentiviral vector stocks. To achieve this goal, simple production methods for high-titer lentiviral vector production involving transfection of HEK 293T cells immediately after plating were developed. Importantly, high titers were obtained with cell culture media lacking serum or other protein additives altogether. As a consequence, large-scale lentiviral vector stocks can now be generated with fewer batch-to-batch variations and at reduced costs and with less labor compared to the standard protocols.

  11. Reduced Order Extended Luenberger Observer Based Sensorless Vector Control Fed by Matrix Converter with Non-linearity Modeling

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2004-01-01

    This paper presents a new sensorless vector control system for high performance induction motor drives fed by a matrix converter with non-linearity compensation. The nonlinear voltage distortion that is caused by commutation delay and on-state voltage drop in switching device is corrected by a new...

  12. Sparse logistic principal components analysis for binary data

    KAUST Repository

    Lee, Seokho

    2010-09-01

    We develop a new principal components analysis (PCA) type dimension reduction method for binary data. Different from the standard PCA which is defined on the observed data, the proposed PCA is defined on the logit transform of the success probabilities of the binary observations. Sparsity is introduced to the principal component (PC) loading vectors for enhanced interpretability and more stable extraction of the principal components. Our sparse PCA is formulated as solving an optimization problem with a criterion function motivated from a penalized Bernoulli likelihood. A Majorization-Minimization algorithm is developed to efficiently solve the optimization problem. The effectiveness of the proposed sparse logistic PCA method is illustrated by application to a single nucleotide polymorphism data set and a simulation study. © Institute of Mathematical Statistics, 2010.

  13. Absence of particle production and factorization of the s-matrix in 1 + 1 dimensional models

    International Nuclear Information System (INIS)

    Parke, S.

    1980-01-01

    In massive, 1 + 1 dimensional, local, quantum field theories the existence of two conserved charges is shown to be a sufficient condition for the absence of particle production and factorization of the s-matrix. These charges must commute and be integrals of local current densities. Their transformation properties under the Lorentz group must be different from each other and also different from the transformation properties of a vector or a scalar. Also, they must not annihilate any single-particle momentum eigenstate. (orig.)

  14. Modeling of Isomerization of C8 Aromatics by Online Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    李丽娟; 苏宏业; 褚建

    2009-01-01

    The least squares support vector regression (LS-SVR) is usually used for modeling single-output systems, but it is not well suited to actual multi-input-multi-output systems. This paper addresses the modeling of multi-output systems by LS-SVR. The multi-output LS-SVR is derived in detail. To avoid the inversion of a large matrix, a recursive algorithm for the parameters is given, which makes the online algorithm of LS-SVR practical. Since the computing time increases with the number of training samples, sparseness is studied based on the projection of online LS-SVR. Samples whose projection residual is less than a threshold are omitted, so that many samples are kept out of the training set and sparseness is obtained. The standard LS-SVR, nonsparse online LS-SVR and sparse online LS-SVR with different thresholds are used for modeling the isomerization of C8 aromatics. The root-mean-square error (RMSE), number of support vectors and running time of the three algorithms are compared, and the results indicate that the performance of the sparse online LS-SVR is more favorable.

  15. A preconditioned inexact newton method for nonlinear sparse electromagnetic imaging

    KAUST Repository

    Desmal, Abdulla

    2015-03-01

    A nonlinear inversion scheme for the electromagnetic microwave imaging of domains with sparse content is proposed. Scattering equations are constructed using a contrast-source (CS) formulation. The proposed method uses an inexact Newton (IN) scheme to tackle the nonlinearity of these equations. At every IN iteration, a system of equations, which involves the Frechet derivative (FD) matrix of the CS operator, is solved for the IN step. A sparsity constraint is enforced on the solution via thresholded Landweber iterations, and the convergence is significantly increased using a preconditioner that levels the FD matrix's singular values associated with contrast and equivalent currents. To increase the accuracy, the weight of the regularization's penalty term is reduced during the IN iterations consistently with the scheme's quadratic convergence. At the end of each IN iteration, an additional thresholding, which removes small 'ripples' that are produced by the IN step, is applied to maintain the solution's sparsity. Numerical results demonstrate the applicability of the proposed method in recovering sparse and discontinuous dielectric profiles with high contrast values.
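
    A minimal Python sketch of the sparsity-enforcing step described above: a thresholded Landweber (iterative shrinkage) loop for a linear(ized) system. It is not the authors' implementation; the operator A, the data y, the soft-thresholding rule, the step size and the threshold value are illustrative assumptions, and the preconditioning and the outer inexact Newton loop of the paper are omitted.

      import numpy as np

      def soft_threshold(x, tau):
          """Complex soft-thresholding (proximal operator of the l1 norm)."""
          mag = np.abs(x)
          return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * x, 0.0)

      def thresholded_landweber(A, y, tau=0.05, n_iter=200):
          """Minimize 0.5*||A x - y||^2 + tau*||x||_1 by iterative shrinkage (ISTA)."""
          step = 1.0 / np.linalg.norm(A, 2) ** 2      # step 1/||A||^2 guarantees convergence
          x = np.zeros(A.shape[1], dtype=complex)
          for _ in range(n_iter):
              x = soft_threshold(x + step * A.conj().T @ (y - A @ x), step * tau)
          return x

      # toy example: recover a sparse complex contrast vector from a random operator
      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 200)) + 1j * rng.standard_normal((80, 200))
      x_true = np.zeros(200, dtype=complex)
      x_true[[10, 50, 120]] = [1.0, -0.5j, 0.8]
      x_rec = thresholded_landweber(A, A @ x_true)
      print(np.round(np.abs(x_rec[[10, 50, 120]]), 2))   # magnitudes on the true support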

  16. On efficient randomized algorithms for finding the PageRank vector

    Science.gov (United States)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
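
    A minimal Python sketch of the first (Markov chain Monte Carlo) idea: estimate the PageRank vector from visit frequencies of random walks over the graph. It is not the authors' algorithm and carries none of their running-time bounds; the damping factor, walk lengths and the toy graph are illustrative assumptions.

      import numpy as np

      def mcmc_pagerank(out_links, n, walks_per_node=20, walk_len=50, damping=0.85, seed=0):
          """Estimate the PageRank vector from visit frequencies of random walks.

          out_links[i] lists the nodes reachable from node i; with probability
          1 - damping (or at a dangling node) the walk teleports uniformly."""
          rng = np.random.default_rng(seed)
          visits = np.zeros(n)
          for start in range(n):
              for _ in range(walks_per_node):
                  node = start
                  for _ in range(walk_len):
                      visits[node] += 1
                      if out_links[node] and rng.random() < damping:
                          node = out_links[node][rng.integers(len(out_links[node]))]
                      else:
                          node = int(rng.integers(n))
          return visits / visits.sum()

      # tiny 4-node example graph given as adjacency lists
      links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
      print(np.round(mcmc_pagerank(links, 4), 3))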

  17. Vector boson plus one jet production in POWHEG

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); INFN, Sezione Milano-Bicocca, Milan (Italy); Nason, Paolo [INFN, Sezione Milano-Bicocca, Milan (Italy); Oleari, Carlo [Milano-Bicocca Univ. (Italy); INFN, Sezione Milano-Bicocca, Milan (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; INFN, Sezione Milano-Bicocca, Milan (Italy)

    2010-09-15

    We present an implementation of the next-to-leading order vector boson plus one jet production process in hadronic collisions in the framework of POWHEG, which is a method to implement NLO calculations within a Shower Monte Carlo context. All spin correlations in the vector boson decay products have been taken into account. The process has been implemented in the framework of the POWHEG BOX, an automated computer code for turning an NLO calculation into a shower Monte Carlo program. We present phenomenological results for the case of the Z/γ plus one jet production process, obtained by matching the POWHEG calculation with the shower performed by PYTHIA, for the LHC, and we compare our results with available Tevatron data. (orig.)

  18. Vector boson plus one jet production in POWHEG

    International Nuclear Information System (INIS)

    Alioli, Simone; Nason, Paolo; Oleari, Carlo; Re, Emanuele

    2010-09-01

    We present an implementation of the next-to-leading order vector boson plus one jet production process in hadronic collisions in the framework of POWHEG, which is a method to implement NLO calculations within a Shower Monte Carlo context. All spin correlations in the vector boson decay products have been taken into account. The process has been implemented in the framework of the POWHEG BOX, an automated computer code for turning an NLO calculation into a shower Monte Carlo program. We present phenomenological results for the case of the Z/γ plus one jet production process, obtained by matching the POWHEG calculation with the shower performed by PYTHIA, for the LHC, and we compare our results with available Tevatron data. (orig.)

  19. OFDM receiver for fast time-varying channels using block-sparse Bayesian learning

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Manchón, Carles Navarro; Rom, Christian

    2016-01-01

    characterized with a basis expansion model using a small number of terms. As a result, the channel estimation problem is posed as that of estimating a vector of complex coefficients that exhibits a block-sparse structure, which we solve with tools from block-sparse Bayesian learning. Using variational Bayesian inference, we embed the channel estimator in a receiver structure that performs iterative channel and noise precision estimation, intercarrier interference cancellation, detection and decoding. Simulation results illustrate the superior performance of the proposed receiver over state-of-the-art receivers.

  20. Molecular design for recombinant adeno-associated virus (rAAV) vector production.

    Science.gov (United States)

    Aponte-Ubillus, Juan Jose; Barajas, Daniel; Peltier, Joseph; Bardliving, Cameron; Shamlou, Parviz; Gold, Daniel

    2018-02-01

    Recombinant adeno-associated virus (rAAV) vectors are increasingly popular tools for gene therapy applications. Their non-pathogenic status, low inflammatory potential, availability of viral serotypes with different tissue tropisms, and prospective long-lasting gene expression are important attributes that make rAAVs safe and efficient therapeutic options. Over the last three decades, several groups have engineered recombinant AAV-producing platforms, yielding high titers of transducing vector particles. Current specific productivity yields from different platforms range from 10^3 to 10^5 vector genomes (vg) per cell, and there is an ongoing effort to improve vector yields in order to satisfy the high product demands required for clinical trials and future commercialization. Crucial aspects of vector production include the molecular design of the rAAV-producing host cell line along with the design of AAV genes, promoters, and regulatory elements. Appropriately configuring and balancing the expression of these elements not only contributes toward high productivity but also improves process robustness and product quality. In this mini-review, the rational design of rAAV-producing expression systems is discussed, with special attention to molecular strategies that contribute to high-yielding, biomanufacturing-amenable rAAV production processes. Details on molecular optimization from four rAAV expression systems are covered: adenovirus, herpesvirus, and baculovirus complementation systems, as well as a recently explored yeast expression system.

  1. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified into a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method.

  2. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes.
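
    A minimal Python sketch of greedy sparse coding of one patch against a given dictionary. It uses plain orthogonal matching pursuit rather than the regularized OMP (ROMP) described above, and the K-SVD training loop is omitted; the dictionary, patch and sparsity level are synthetic assumptions.

      import numpy as np

      def omp(D, y, k):
          """Orthogonal matching pursuit: approximate y with at most k atoms of D.

          D must have unit-norm columns; returns the sparse coefficient vector."""
          residual = y.copy()
          support = []
          coef = np.zeros(D.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
              if j not in support:
                  support.append(j)
              sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
              residual = y - D[:, support] @ sol
          coef[support] = sol
          return coef

      rng = np.random.default_rng(1)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)                        # unit-norm dictionary atoms
      patch = D[:, [3, 17, 200]] @ np.array([1.5, -0.7, 0.4])
      print(np.nonzero(omp(D, patch, k=3))[0])              # expected support: 3, 17, 200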

  3. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  4. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  5. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2018-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90’s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance to the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  6. A sparse-grid isogeometric solver

    KAUST Repository

    Beck, Joakim

    2018-02-28

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90’s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance to the literature, a sparse-grid construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  7. KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-11

    KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS efficiently runs on various GPU architectures while avoiding code rewriting and retaining compliance with the standard BLAS API. Another optimization technique allows ensuring coalesced memory access when dealing with submatrices, especially for high-level dense linear algebra algorithms. All KBLAS kernels have been leveraged to a multi-GPU environment, which requires the introduction of new APIs. Considering general matrices, KBLAS is very competitive with existing state-of-the-art kernels and provides a smoother performance across a wide range of matrix dimensions. Considering symmetric and Hermitian matrices, the KBLAS performance outperforms existing state-of-the-art implementations on all matrix sizes and achieves asymptotically up to 50% and 60% speedup against the best competitor on single GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS) for larger dissemination, starting from version 6.0. © 2016 ACM.

  8. BJUT at TREC 2015 Microblog Track: Real-Time Filtering Using Non-negative Matrix Factorization

    Science.gov (United States)

    2015-11-20

    [Abstract not available: the record contains only fragments of a system flow diagram (query processing, tweet vector preprocessing, word-document matrix construction, feature-vector similarity ranking, tweet recommendation) and truncated reference-list entries.]

  9. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  10. Variational optimization algorithms for uniform matrix product states

    Science.gov (United States)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  11. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  12. SPATIAL-SPECTRAL CLASSIFICATION BASED ON THE UNSUPERVISED CONVOLUTIONAL SPARSE AUTO-ENCODER FOR HYPERSPECTRAL REMOTE SENSING IMAGERY

    Directory of Open Access Journals (Sweden)

    X. Han

    2016-06-01

    Full Text Available Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly consider concatenating the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window selection strategy selects the sub-window spatial-spectral information for spatial-spectral feature learning and extraction with the sparse auto-encoder (SAE). A convolution mechanism is applied after the SAE feature extraction stage, with the SAE features computed over the larger outer window. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets – the Pavia University dataset and the Kennedy Space Centre (KSC) dataset – and shows an improvement over the traditional hyperspectral spatial-spectral classification methods.

  13. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.

  14. Sparse coding reveals greater functional connectivity in female brains during naturalistic emotional experience.

    Directory of Open Access Journals (Sweden)

    Yudan Ren

    Full Text Available Functional neuroimaging is widely used to examine changes in brain function associated with age, gender or neuropsychiatric conditions. FMRI (functional magnetic resonance imaging) studies employ either laboratory-designed tasks that engage the brain with abstracted and repeated stimuli, or resting state paradigms with little behavioral constraint. Recently, novel neuroimaging paradigms using naturalistic stimuli are gaining increasing attention, as they offer an ecologically-valid condition to approximate brain function in real life. Wider application of naturalistic paradigms in exploring individual differences in brain function, however, awaits further advances in statistical methods for modeling dynamic and complex datasets. Here, we developed a novel data-driven strategy that employs group sparse representation to assess gender differences in brain responses during naturalistic emotional experience. Compared to independent component analysis (ICA), the sparse coding algorithm considers the intrinsic sparsity of neural coding and thus could be more suitable for modeling dynamic whole-brain fMRI signals. An online dictionary learning and sparse coding algorithm was applied to the aggregated fMRI signals from both groups, which were subsequently factorized into a common time series signal dictionary matrix and the associated weight coefficient matrix. Our results demonstrate that group sparse representation can effectively identify gender differences in functional brain networks during natural viewing, with improved sensitivity and reliability over the ICA-based method. Group sparse representation hence offers a superior data-driven strategy for examining brain function during naturalistic conditions, with great potential for clinical application in neuropsychiatric disorders.
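
    A minimal Python sketch of the generic online dictionary learning plus sparse coding step that the abstract refers to, using scikit-learn. It is not the authors' pipeline, and the signal matrix below is a random placeholder standing in for aggregated fMRI signals.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      # Random placeholder for an aggregated signal matrix: rows = time points,
      # columns = voxels (purely synthetic, standing in for whole-brain fMRI data).
      rng = np.random.default_rng(0)
      X = rng.standard_normal((400, 1000))

      # Online dictionary learning: factorize X ~ codes @ dictionary, where the
      # dictionary rows act as common basis signals and the codes are sparse weights.
      learner = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                            batch_size=32, random_state=0)
      codes = learner.fit_transform(X)        # sparse weight coefficient matrix
      dictionary = learner.components_        # learned signal dictionary matrix

      print(codes.shape, dictionary.shape)                    # (400, 50) (50, 1000)
      print("fraction of nonzero codes:", round(float(np.mean(codes != 0)), 3))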

  15. Study of The Vector Product using Three Dimensions Vector Card of Engineering in Pathumwan Institute of Technology

    Science.gov (United States)

    Mueanploy, Wannapa

    2015-06-01

    The objective of this research was to offer a way to improve engineering students' understanding of the physics topic of the vector product. The sample consisted of engineering students at Pathumwan Institute of Technology during the first semester of the 2013 academic year. 1) 120 students selected by random sampling were asked to fill in a satisfaction questionnaire in order to choose the size of the three-dimensional vector card to be applied in the classroom. 2) 60 students selected by random sampling took the achievement test to be used in the classroom. The achievement test was analyzed with the Kuder-Richardson method (KR-20). The results show that 12 items of the achievement test are appropriate for classroom use, with Difficulty (P) = 0.40-0.67, Discrimination = 0.33-0.73 and Reliability (r) = 0.70. 3) For the classroom experiment, 60 students selected by random sampling were divided into two groups: group one (the control group) of 30 students studied the vector product lesson with the regular teaching method, and group two (the experimental group) of 30 students learned the vector product lesson with the three-dimensional vector card. 4) Comparing the data of the control and experimental groups, the results showed that the experimental group scored higher on the achievement test than the control group, significant at the .01 level.

  16. Interference in Exclusive Vector Meson Production in Heavy-Ion Collisions

    International Nuclear Information System (INIS)

    Klein, Spencer R.; Nystrand, Joakim

    2000-01-01

    Vector mesons are produced copiously in peripheral relativistic heavy-ion collisions. Virtual photons from one ion can fluctuate into quark-antiquark pairs and scatter from the second ion, emerging as vector mesons. The emitter and target are indistinguishable, so emission from the two ions will interfere. Vector mesons have negative parity so the interference is destructive, reducing the production of mesons with small transverse momentum. The mesons are short lived, and decay before emission from the two ions can overlap. However, the decay-product wave functions overlap and interfere since they are produced in an entangled state, providing an example of the Einstein-Podolsky-Rosen paradox. (c) 2000 The American Physical Society

  17. Color normalization of histology slides using graph regularized sparse NMF

    Science.gov (United States)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in the lαβ space.

  18. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly-emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independency of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets showed that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors, while maintaining competitive generalization abilities. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest of the FLSA-SVM, LS-SVM, and SVM algorithms.

  19. Distributed CPU multi-core implementation of SIRT with vectorized matrix kernel for micro-CT

    Energy Technology Data Exchange (ETDEWEB)

    Gregor, Jens [Tennessee Univ., Knoxville, TN (United States)

    2011-07-01

    We describe an implementation of SIRT for execution using a cluster of multi-core PCs. Algorithmic techniques are presented for reducing the size and computational cost of a reconstruction, including near-optimal relaxation, scalar preconditioning, orthogonalized ordered subsets, and data-driven focus of attention. Implementation-wise, a scheme is outlined which provides each core mutex-free access to its local shared memory while also balancing the workload across the cluster, and the system matrix is computed on-the-fly using vectorized code. Experimental results show the efficacy of the approach. (orig.)
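
    A minimal single-node Python sketch of the basic SIRT update underlying the work above; the distributed multi-core scheme, the on-the-fly vectorized system-matrix kernel, preconditioning and ordered subsets are not reproduced, and the sparse system matrix and data below are toy assumptions.

      import numpy as np
      from scipy.sparse import random as sparse_random

      def sirt(A, b, n_iter=500, relax=1.0):
          """Basic SIRT: x <- x + relax * C A^T R (b - A x), where R and C hold the
          inverse row and column sums of the nonnegative system matrix A."""
          row_sums = np.asarray(A.sum(axis=1)).ravel()
          col_sums = np.asarray(A.sum(axis=0)).ravel()
          R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
          C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x += relax * C * (A.T @ (R * (b - A @ x)))
          return x

      # toy problem: a sparse nonnegative "projection" matrix and a known image
      rng = np.random.default_rng(0)
      A = sparse_random(300, 100, density=0.1, random_state=0, data_rvs=rng.random).tocsr()
      x_true = rng.random(100)
      x_rec = sirt(A, A @ x_true)
      print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))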

  20. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or matrix determinant computations. These are difficult to program and especially hard to realize in hardware. At the same time, the computational costs of the algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations alone, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is shown by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, compared with that of the other two algorithms, is the lowest. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
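
    A hedged Python reconstruction of the idea sketched in the abstract: each endmember is orthogonalized against all of the others using only vector operations, and the unconstrained abundance is obtained as a ratio of projections, with no matrix inversion. The exact normalization used in the paper may differ; the endmember spectra below are synthetic.

      import numpy as np

      def ovp_abundances(M, x):
          """Orthogonal-vector-projection style unmixing (hedged reconstruction).

          M: (bands, endmembers) signature matrix, x: pixel spectrum.  For each
          endmember, a vector orthogonal to all other endmembers is built with
          Gram-Schmidt using only vector operations; the unconstrained abundance
          is the ratio of the pixel's and the endmember's projections onto it."""
          bands, p = M.shape
          abundances = np.zeros(p)
          for k in range(p):
              basis = []                       # orthonormal basis of the other endmembers
              for j in range(p):
                  if j == k:
                      continue
                  v = M[:, j].copy()
                  for b in basis:
                      v -= (b @ v) * b         # Gram-Schmidt step, vector operations only
                  basis.append(v / np.linalg.norm(v))
              q = M[:, k].copy()
              for b in basis:
                  q -= (b @ q) * b             # component of endmember k orthogonal to the rest
              abundances[k] = (q @ x) / (q @ M[:, k])
          return abundances

      rng = np.random.default_rng(0)
      M = rng.random((50, 4))                  # 4 synthetic endmember spectra
      a_true = np.array([0.1, 0.4, 0.3, 0.2])
      pixel = M @ a_true + 0.001 * rng.standard_normal(50)
      print(np.round(ovp_abundances(M, pixel), 3))   # close to the true abundances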

  1. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
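
    A minimal Python sketch of the encoder side only: build a column-sparse random binary measurement matrix (a generic stand-in for the paper's sparse random binary matrix; the deterministic QCAC construction is not reproduced) and compress a window of samples with a sparse matrix-vector product. Sparse recovery at the receiver is not shown, and the matrix sizes and per-column sparsity are illustrative assumptions.

      import numpy as np
      from scipy.sparse import csc_matrix

      def sparse_random_binary_matrix(m, n, d, seed=0):
          """m x n binary measurement matrix with exactly d ones per column placed
          uniformly at random (a generic stand-in for the paper's SRBM)."""
          rng = np.random.default_rng(seed)
          rows = np.concatenate([rng.choice(m, size=d, replace=False) for _ in range(n)])
          cols = np.repeat(np.arange(n), d)
          data = np.ones(n * d, dtype=np.int8)
          return csc_matrix((data, (rows, cols)), shape=(m, n))

      n, m, d = 512, 128, 4                    # window length, measurements, ones per column
      Phi = sparse_random_binary_matrix(m, n, d)
      neural_window = np.random.default_rng(1).standard_normal(n)   # placeholder samples
      y = Phi @ neural_window                  # compressed measurements to be transmitted
      print(Phi.nnz, "nonzeros out of", m * n, "-> measurement vector of length", y.shape[0])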

  2. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    Science.gov (United States)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.

  3. Sparse Representation Denoising for Radar High Resolution Range Profiling

    Directory of Open Access Journals (Sweden)

    Min Li

    2014-01-01

    Full Text Available Radar high resolution range profile has attracted considerable attention in radar automatic target recognition. In practice, radar return is usually contaminated by noise, which results in profile distortion and recognition performance degradation. To deal with this problem, in this paper, a novel denoising method based on sparse representation is proposed to remove the Gaussian white additive noise. The return is sparsely described in the Fourier redundant dictionary and the denoising problem is described as a sparse representation model. Noise level of the return, which is crucial to the denoising performance but often unknown, is estimated by performing subspace method on the sliding subsequence correlation matrix. Sliding window process enables noise level estimation using only one observation sequence, not only guaranteeing estimation efficiency but also avoiding the influence of profile time-shift sensitivity. Experimental results show that the proposed method can effectively improve the signal-to-noise ratio of the return, leading to a high-quality profile.

  4. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    Science.gov (United States)

    Habiby, Sarry F.; Collins, Stuart A., Jr.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  5. Learning Latent Vector Spaces for Product Search

    NARCIS (Netherlands)

    Van Gysel, C.; de Rijke, M.; Kanoulas, E.

    2016-01-01

    We introduce a novel latent vector space model that jointly learns the latent representations of words, e-commerce products and a mapping between the two without the need for explicit annotations. The power of the model lies in its ability to directly model the discriminative relation between

  6. Computing Coherence Vectors and Correlation Matrices with Application to Quantum Discord Quantification

    Directory of Open Access Journals (Sweden)

    Jonas Maziero

    2016-01-01

    Full Text Available Coherence vectors and correlation matrices are important functions frequently used in physics. The numerical calculation of these functions directly from their definitions, which involves Kronecker products and matrix multiplications, may seem to be a reasonable option. Notwithstanding, as we demonstrate in this paper, some algebraic manipulations before programming can reduce considerably their computational complexity. Besides, we provide Fortran code to generate generalized Gell-Mann matrices and to compute the optimized and unoptimized versions of associated Bloch’s vectors and correlation matrix in the case of bipartite quantum systems. As a code test and application example, we consider the calculation of Hilbert-Schmidt quantum discords.
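
    The record above provides Fortran code; the following is a hedged Python analogue showing one common convention for the generalized Gell-Mann matrices and the associated coherence (Bloch) vector of a density matrix. Normalization conventions vary in the literature, and the bipartite correlation matrix would follow analogously from Kronecker products of these matrices.

      import numpy as np

      def gell_mann_matrices(d):
          """Generalized Gell-Mann matrices for a d-level system (one common convention)."""
          mats = []
          for j in range(d):
              for k in range(j + 1, d):
                  s = np.zeros((d, d), dtype=complex)
                  s[j, k] = s[k, j] = 1.0                  # symmetric matrices
                  a = np.zeros((d, d), dtype=complex)
                  a[j, k], a[k, j] = -1j, 1j               # antisymmetric matrices
                  mats.extend([s, a])
          for l in range(1, d):                            # diagonal matrices
              diag = np.zeros((d, d), dtype=complex)
              diag[np.arange(l), np.arange(l)] = 1.0
              diag[l, l] = -l
              mats.append(np.sqrt(2.0 / (l * (l + 1))) * diag)
          return mats

      def coherence_vector(rho):
          """Components Tr(rho Gamma_i); note that normalization conventions vary."""
          return np.array([np.trace(rho @ G).real for G in gell_mann_matrices(rho.shape[0])])

      rho = np.diag([0.5, 0.3, 0.2]).astype(complex)       # a qutrit density matrix
      print(np.round(coherence_vector(rho), 3))            # 8 components for d = 3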

  7. PRODUCT PORTFOLIO ANALYSIS - ARTHUR D. LITTLE MATRIX

    Directory of Open Access Journals (Sweden)

    Curmei Catalin Valeriu

    2011-07-01

    Full Text Available In recent decades we have witnessed an unseen dynamism among companies, which is explained by their desire to engage in more activities that provide a high level of development and diversification. Thus, as companies diversify more and more, their managers confront a number of challenges arising from the management of resources for the product portfolio and from the limited resources available to companies at any given time. Responding to these challenges, a series of analytical product portfolio methods were developed over time, through which managers can balance the sources of cash flows from multiple products and can also identify the place and role of products, in strategic terms, within the product portfolio. In order to identify these methods, the authors of the present paper conducted desk research analyzing the strategic marketing and management literature of the last two decades. A series of methods presented in the marketing and management literature as the main instruments of the product portfolio strategic planning process were studied in depth. Among these methods we focused on the Arthur D. Little matrix. Thus, the present paper aims to outline the characteristics and strategic implications of the ADL matrix within a company’s product portfolio. After conducting this analysis we found that restricting product portfolio analysis to the A.D.L. matrix is not a very wise decision. The A.D.L. matrix, along with other marketing tools for product portfolio analysis, has advantages and disadvantages and tries to provide, at a given time, a specific diagnosis of a company’s product portfolio. Therefore, the recommendation for Romanian managers consists in the combined use of a wide range of tools and techniques for product portfolio analysis. This leads to a better understanding of the whole mix of product markets included in portfolio analysis, the strategic position

  8. A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Yubao Sun

    2015-01-01

    Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.

  9. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.; Michigan Univ., Ann Arbor, MI

    1991-01-01

    In order to use efficiently the new features of supercomputers, production codes, usually written 10 -20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedup in the execution times was obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize the parallel/vector architecture of a multiprocessor IBM 3090. (author)

  10. Vectorization and parallelization of a production reactor assembly code

    International Nuclear Information System (INIS)

    Vujic, J.L.; Martin, W.R.

    1991-01-01

    In order to efficiently use new features of supercomputers, production codes, usually written 10 - 20 years ago, must be tailored for modern computer architectures. We have chosen to optimize the CPM-2 code, a production reactor assembly code based on the collision probability transport method. Substantial speedups in the execution times were obtained with the parallel/vector version of the CPM-2 code. In addition, we have developed a new transfer probability method, which removes some of the modelling limitations of the collision probability method encoded in the CPM-2 code, and can fully utilize parallel/vector architecture of a multiprocessor IBM 3090. (author)

  11. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim; Sangalli, Giancarlo; Tamellini, Lorenzo

    2017-01-01

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance to the literature, a sparse grids construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  12. A sparse version of IGA solvers

    KAUST Repository

    Beck, Joakim

    2017-07-30

    Isogeometric Analysis (IGA) typically adopts tensor-product splines and NURBS as a basis for the approximation of the solution of PDEs. In this work, we investigate to which extent IGA solvers can benefit from the so-called sparse-grids construction in its combination technique form, which was first introduced in the early 90s in the context of the approximation of high-dimensional PDEs. The tests that we report show that, in accordance to the literature, a sparse grids construction can indeed be useful if the solution of the PDE at hand is sufficiently smooth. Sparse grids can also be useful in the case of non-smooth solutions when some a-priori knowledge on the location of the singularities of the solution can be exploited to devise suitable non-equispaced meshes. Finally, we remark that sparse grids can be seen as a simple way to parallelize pre-existing serial IGA solvers in a straightforward fashion, which can be beneficial in many practical situations.

  13. Classifying FM Value Positioning by Using a Product-Process Matrix

    DEFF Research Database (Denmark)

    Katchamart, Akarapong

    with the type of facilities processes between FM organizations with their clients. Approach (Theory/Methodology): The paper develops the facilities product - process matrix to allow comparisons of different facilities products with facilities processes and illustrate its degree of value delivering. The building......, characterized by levels of information, knowledge and innovation sharing, and mutual involvement, defines four facilities process types. Positions on the matrix capture the product-process interrelationships in facilities management. Practical Implications: The paper presents propositions of relating...... blocks of matrix are a facilities product structure and a facilities process structure. Results: A facilities product structure, characterized by degrees of facilities product customization, complexity, contingencies involved, defines four facilities product categories. A facilities process structure...

  14. Efficient Vector-Based Forwarding for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Xie

    2010-01-01

    Full Text Available Underwater Sensor Networks (UWSNs) are significantly different from terrestrial sensor networks in the following aspects: low bandwidth, high latency, node mobility, high error probability, and 3-dimensional space. These new features bring many challenges to the network protocol design of UWSNs. In this paper, we tackle one fundamental problem in UWSNs: robust, scalable, and energy efficient routing. We propose vector-based forwarding (VBF), a geographic routing protocol. In VBF, the forwarding path is guided by a vector from the source to the target, no state information is required on the sensor nodes, and only a small fraction of the nodes is involved in routing. To improve the robustness, packets are forwarded in redundant and interleaved paths. Further, a localized and distributed self-adaptation algorithm allows the nodes to reduce energy consumption by discarding redundant packets. VBF performs well in dense networks. For sparse networks, we propose a hop-by-hop vector-based forwarding (HH-VBF) protocol, which adapts the vector-based approach at every hop. We evaluate the performance of VBF and HH-VBF through extensive simulations. The simulation results show that VBF achieves high packet delivery ratio and energy efficiency in dense networks and HH-VBF has high packet delivery ratio even in sparse networks.
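
    A minimal Python sketch of the geometric core of VBF: a node takes part in forwarding only if its distance to the source-target routing vector is within the pipe radius. The coordinates and radius below are toy values, and the protocol's desirableness factor, holding timers and energy adaptation are omitted.

      import numpy as np

      def distance_to_routing_vector(node, source, target):
          """Perpendicular distance from a node to the 3-D line through source and target."""
          s, t, p = (np.asarray(v, dtype=float) for v in (source, target, node))
          direction = t - s
          return np.linalg.norm(np.cross(p - s, direction)) / np.linalg.norm(direction)

      def should_forward(node, source, target, pipe_radius):
          """VBF pipe test: only nodes inside the virtual routing pipe forward the packet."""
          return distance_to_routing_vector(node, source, target) <= pipe_radius

      source, target = (0.0, 0.0, 0.0), (100.0, 0.0, 0.0)
      for node in [(50.0, 5.0, 3.0), (40.0, 30.0, 0.0)]:
          verdict = "forwards" if should_forward(node, source, target, pipe_radius=10.0) else "drops"
          print(node, verdict)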

  15. Performance Improvement of Sensorless Vector Control for Induction Motor Drives Fed by Matrix Converter Using Nonlinear Model and Disturbance Observer

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2004-01-01

    This paper presents a new sensorless vector control system for high performance induction motor drives fed by a matrix converter with non-linearity compensation and a disturbance observer. The nonlinear voltage distortion that is caused by commutation delay and on-state voltage drop in switching...

  16. Global Convergence of Schubert’s Method for Solving Sparse Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Huiping Cao

    2014-01-01

    Full Text Available Schubert’s method is an extension of Broyden’s method for solving sparse nonlinear equations, which can preserve the zero-nonzero structure defined by the sparse Jacobian matrix and can retain many good properties of Broyden’s method. In particular, Schubert’s method has been proved to be locally and q-superlinearly convergent. In this paper, we globalize Schubert’s method by using a nonmonotone line search. Under appropriate conditions, we show that the proposed algorithm converges globally and superlinearly. Some preliminary numerical experiments are presented, which demonstrate that our algorithm is effective for large-scale problems.
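
    A hedged Python sketch of the sparsity-preserving (Schubert/sparse Broyden) secant update that the abstract builds on; the nonmonotone line-search globalization of the paper is not reproduced, and the tridiagonal sparsity pattern and test vectors are illustrative assumptions.

      import numpy as np

      def schubert_update(B, s, y, pattern):
          """Sparse Broyden (Schubert) update of the Jacobian approximation B.

          pattern is a boolean matrix giving the prescribed zero/nonzero structure;
          each row of B is corrected only on its own pattern so that the secant
          condition B_new @ s = y holds while the sparsity is preserved."""
          B_new = B.copy()
          residual = y - B @ s
          for i in range(B.shape[0]):
              s_i = np.where(pattern[i], s, 0.0)    # s restricted to row i's pattern
              denom = s_i @ s_i
              if denom > 0.0:
                  B_new[i] += (residual[i] / denom) * s_i
          return B_new

      # toy tridiagonal sparsity pattern
      n = 5
      pattern = np.eye(n, dtype=bool) | np.eye(n, k=1, dtype=bool) | np.eye(n, k=-1, dtype=bool)
      rng = np.random.default_rng(0)
      B = np.where(pattern, rng.standard_normal((n, n)), 0.0)
      s, y = rng.standard_normal(n), rng.standard_normal(n)
      B1 = schubert_update(B, s, y, pattern)
      print(np.allclose(B1 @ s, y))                           # secant condition holds
      print(np.allclose(np.where(~pattern, B1, 0.0), 0.0))    # sparsity pattern preserved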

  17. Sparse Channel Estimation Including the Impact of the Transceiver Filters with Application to OFDM

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Pedersen, Niels Lovmand; Manchón, Carles Navarro

    2014-01-01

    Traditionally, the dictionary matrices used in sparse wireless channel estimation have been based on the discrete Fourier transform, following the assumption that the channel frequency response (CFR) can be approximated as a linear combination of a small number of multipath components, each one......) and receive (demodulation) filters. Hence, the assumption of the CFR being sparse in the canonical Fourier dictionary may no longer hold. In this work, we derive a signal model and subsequently a novel dictionary matrix for sparse estimation that account for the impact of transceiver filters. Numerical...... results obtained in an OFDM transmission scenario demonstrate the superior accuracy of a sparse estimator that uses our proposed dictionary rather than the classical Fourier dictionary, and its robustness against a mismatch in the assumed transmit filter characteristics....

  18. Inclusive vector meson production and hadron structure

    International Nuclear Information System (INIS)

    Boeckmann, K.

    1977-08-01

    It is shown that J/PSI production in hadronic interactions is dominated by central production from sea quarks even at beam momenta as low as 40 GeV/c. All known experimental data on inclusive vector meson production support the hypothesis that cross sections obtained from meson-nucleon and nucleon-nucleon interactions have to be compared in the quark C.M. system. With the distinction of sea-quark and valence-quark interactions in the additive quark model, a consistent description of inclusive rho, K*, PHI and J/PSI production in hadronic interactions emerges. A natural connection between the inclusive rho 0 production cross sections in anti pp, pp and πp interactions is obtained. (orig.) [de

  19. Exclusive vector meson production in pp collisions at the COMPASS experiment

    International Nuclear Information System (INIS)

    Bernhard, Johannes

    2014-01-01

    Mechanisms for particle production in proton-proton collisions at intermediate energies are studied within the COMPASS collaboration using the COMPASS spectrometer at the M2 beam line of the SPS at CERN. The possible production mechanisms are investigated using the production of the vector mesons ω and φ and include resonant diffractive excitation of the beam proton with a subsequent decay of the resonance, central production and the related "shake-off" mechanism. The data which were used for this thesis were collected in the years 2008 and 2009 with a 190 GeV/c proton beam impinging on a liquid hydrogen target that was surrounded by a recoil proton detector (RPD). The RPD is an integral part of the newly developed hadron trigger system, for which in addition several new detectors have been built. The performance of both the RPD and the hadron trigger system is scrutinised and efficiency parameters are extracted. A method for reconstruction of recoil protons and for calibration is developed and described. The production of ω mesons is studied with the reaction pp → pωp, ω → π+π-π0, and the production of φ mesons with the reaction pp → pφp, φ → K+K-, for momentum transfers squared between 0.1 (GeV/c)^2 and 1 (GeV/c)^2. The production ratio σ(pp → pφp)/σ(pp → pωp) is determined as a function of the longitudinal momentum fraction x_F and compared to the OZI rule prediction. A significant violation of the OZI rule dependent on x_F is found and discussed with respect to resonant structures in the pω mass spectrum. Removing the low pω/pφ mass region which includes these structures eliminates the x_F dependence. In addition, the spin density matrix element ρ_00, i.e. the spin alignment, for both ω and φ mesons is studied. One study is performed in the helicity frame, which allows resonant diffractive excitation to be discriminated. In a second study, a reference frame with respect to the direction of the momentum transfer

  20. Removing flicker based on sparse color correspondences in old film restoration

    Science.gov (United States)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is a mainstream trend nowadays. In this paper, we propose a sparse color correspondences based technique to remove fading flicker from old films. Our model uses multiple frames to establish a simple correction model and includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference frame parameters for color recovery correction and other frame parameters for color consistency correction to remove flicker. Because it combines multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it can remove fading flicker efficiently.
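
    As a hedged illustration of the second step above (low-rank factorization of a matrix with many missing entries), the sketch below runs a plain alternating least squares on a masked synthetic matrix. The rank, mask density and regularization are illustrative assumptions; this is not the authors' correction model.

```python
# Sketch: rank-r factorization of a matrix with missing entries by alternating
# least squares, illustrating the "low-rank factorization with many missing
# entries" step of the abstract. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 40, 30, 2
M_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.4            # ~40% of entries observed
M_obs = np.where(mask, M_true, 0.0)

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
lam = 1e-3                                  # small ridge term for stability

for _ in range(50):
    # Solve for each row of U using only that row's observed columns.
    for i in range(m):
        cols = np.flatnonzero(mask[i])
        A = V[cols]
        U[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M_obs[i, cols])
    # Solve for each row of V using only that column's observed rows.
    for j in range(n):
        rows = np.flatnonzero(mask[:, j])
        A = U[rows]
        V[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M_obs[rows, j])

err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print(f"relative reconstruction error: {err:.3e}")
```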

  1. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
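
    The central kernel above, multiplying a sparse matrix by a block of vectors (SpMM), can be illustrated with SciPy's CSR format; the CSB data structure and the optimized kernels of the paper are not available here, so the sketch only contrasts one blocked SpMM call with a loop of single-vector SpMVs. Matrix size and density are illustrative assumptions.

```python
# Sketch: sparse matrix times multiple vectors (SpMM) with SciPy's CSR format.
# The CSB format and the blocked kernels of the paper are not available in
# SciPy; this only illustrates the SpMM operation the eigensolver relies on.
import time
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(2)
n, nblock = 200_000, 8
A = sp.random(n, n, density=1e-5, format="csr", random_state=2)
A = A + A.T                                  # symmetric, as in CI matrices
X = rng.standard_normal((n, nblock))

t0 = time.perf_counter()
Y_block = A @ X                              # one SpMM call on the whole block
t1 = time.perf_counter()
Y_cols = np.column_stack([A @ X[:, j] for j in range(nblock)])   # repeated SpMV
t2 = time.perf_counter()

print(np.allclose(Y_block, Y_cols))
print(f"blocked SpMM: {t1 - t0:.3f}s, {nblock} separate SpMVs: {t2 - t1:.3f}s")
```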

  2. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering the case of a small target with a complex background, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.

  3. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.

  4. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  5. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
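
    As a hedged sketch of the plain sparse-coding step these two records build on (codewords fixed, codes obtained from an l1-penalized least-squares problem via ISTA), the snippet below is illustrative only; the semi-supervised objective with manifold and label terms is not reproduced, and all sizes are assumptions.

```python
# Sketch: the basic sparse-coding step (codewords fixed, codes solved by ISTA
# for an l1-penalised least-squares problem). The semi-supervised objective of
# the paper, with manifold and label terms, is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
d, k = 20, 50                       # sample dimension, number of codewords
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)      # unit-norm codewords
x = D[:, [3, 17, 41]] @ np.array([1.0, -0.7, 0.5])   # sample built from 3 codewords

lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
s = np.zeros(k)
for _ in range(500):
    grad = D.T @ (D @ s - x)
    z = s - step * grad
    s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding

print("active codewords:", np.flatnonzero(np.abs(s) > 1e-3))
```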

  6. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative methods. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  7. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence of iterative methods. We have deployed our direct methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  8. Vectorization in quantum chemistry

    International Nuclear Information System (INIS)

    Saunders, V.R.

    1987-01-01

    It is argued that the optimal vectorization algorithm for many steps (and sub-steps) in a typical ab initio calculation of molecular electronic structure is quite strongly dependent on the target vector machine. Details such as the availability (or lack) of a given vector construct in the hardware, vector startup times and asymptotic rates must all be considered when selecting the optimal algorithm. Illustrations are drawn from: Gaussian integral evaluation, Fock matrix construction, 4-index transformation of molecular integrals, direct-CI methods, the matrix multiply operation. A cross comparison of practical implementations on the CDC Cyber 205, the Cray-1S and Cray-XMP machines is presented. To achieve portability while remaining optimal on a wide range of machines it is necessary to code all available algorithms in a machine independent manner, and to select the appropriate algorithm using a procedure which is based on machine dependent parameters. Most such parameters concern the timing of certain vector loop kernels, which can usually be derived from a 'bench-marking' routine executed prior to the calculation proper

  9. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  10. A mariner transposon vector adapted for mutagenesis in oral streptococci

    DEFF Research Database (Denmark)

    Nilsson, Martin; Christiansen, Natalia; Høiby, Niels

    2014-01-01

    This article describes the construction and characterization of a mariner-based transposon vector designed for use in oral streptococci, but with a potential use in other Gram-positive bacteria. The new transposon vector, termed pMN100, contains the temperature-sensitive origin of replication rep...... 5000 mutants was used in a screen to identify genes involved in the production of sucrose-dependent extracellular matrix components. Mutants with transposon inserts in genes encoding glycosyltransferases and the competence-related secretory locus were predominantly found in this screen....

  11. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane

    2010-01-01

    We consider Linear Programming (LP) solution of a Compressed Sensing (CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees the error-free reconstruction over reals for properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error-patterns, so called instantons, degrading performance of a finite size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output is different from the actual error-pattern. We design CS-Instanton Search Algorithm (ISA) generating a sparse vector, called CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error-vector, the CS-ISA converges to an instanton in a small finite number of steps. Performance of the CS-ISA is tested on example of a randomly generated 512 * 120 matrix, that outputs the shortest instanton (error vector) pattern of length 11.
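
    The Basis Pursuit step itself can be written as a small linear program; the hedged sketch below does so with scipy.optimize.linprog by splitting x into positive and negative parts. The CS-Instanton Search Algorithm of the record is not reproduced, and the problem size here is an assumption rather than the 512 * 120 example mentioned above.

```python
# Sketch: the Basis Pursuit LP itself, min ||x||_1 subject to A x = y, written
# as a standard-form LP over x = u - v with u, v >= 0. The CS-Instanton Search
# Algorithm of the record is not reproduced here.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 40, 120, 5                        # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

c = np.ones(2 * n)                          # minimise sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                   # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```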

  12. The application of sparse estimation of covariance matrix to quadratic discriminant analysis.

    Science.gov (United States)

    Sun, Jiehuan; Zhao, Hongyu

    2015-02-18

    Although Linear Discriminant Analysis (LDA) is commonly used for classification, it may not be directly applied in genomics studies due to the large p, small n problem in these studies. Different versions of sparse LDA have been proposed to address this significant challenge. One implicit assumption of various LDA-based methods is that the covariance matrices are the same across different classes. However, rewiring of genetic networks (therefore different covariance matrices) across different diseases has been observed in many genomics studies, which suggests that LDA and its variations may be suboptimal for disease classifications. However, it is not clear whether considering differing genetic networks across diseases can improve classification in genomics studies. We propose a sparse version of Quadratic Discriminant Analysis (SQDA) to explicitly consider the differences of the genetic networks across diseases. Both simulation and real data analysis are performed to compare the performance of SQDA with six commonly used classification methods. SQDA provides more accurate classification results than other methods for both simulated and real data. Our method should prove useful for classification in genomics studies and other research settings, where covariances differ among classes.

  13. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    Science.gov (United States)

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    While existing logistic regression suffers from overfitting and often fails in considering structural information, we propose a novel matrix-based logistic regression to overcome the weakness. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that in comparison to both the traditional tensor-based methods and the vector-based regression methods, our proposed solution achieves better performance for matrix data classifications.

  14. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu; Bibi, Adel Aamer; Ghanem, Bernard

    2016-01-01

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.

  15. In Defense of Sparse Tracking: Circulant Sparse Tracker

    KAUST Repository

    Zhang, Tianzhu

    2016-12-13

    Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework. However, most sparse representation based trackers have high computational cost, less than promising tracking performance, and limited feature representation. To deal with the above issues, we propose a novel circulant sparse tracker (CST), which exploits circulant target templates. Because of the circulant structure property, CST has the following advantages: (1) It can refine and reduce particles using circular shifts of target templates. (2) The optimization can be efficiently solved entirely in the Fourier domain. (3) High dimensional features can be embedded into CST to significantly improve tracking performance without sacrificing much computation time. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that CST performs better than all other sparse trackers and favorably against state-of-the-art methods.
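
    The circulant-structure property these two records rely on can be checked in a few lines: multiplying by a circulant matrix built from a template is an elementwise product in the Fourier domain. The sketch below is only a numerical check of that identity, not the CST tracker itself; sizes are illustrative.

```python
# Sketch: the circulant-structure property that CST exploits. Multiplication by
# a circulant matrix built from a template t (all circular shifts of t) reduces
# to elementwise products in the Fourier domain.
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(5)
n = 16
t = rng.standard_normal(n)                  # target template
x = rng.standard_normal(n)                  # e.g. a candidate particle

C = circulant(t)                            # column j is t circularly shifted by j
y_dense = C @ x                             # O(n^2) dense product
y_fft = np.fft.ifft(np.fft.fft(t) * np.fft.fft(x)).real    # O(n log n) via FFT

print(np.allclose(y_dense, y_fft))          # True: the two agree
```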

  16. Convex Banding of the Covariance Matrix.

    Science.gov (United States)

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
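
    As a hedged, simplified illustration of banding for variables with a known ordering, the sketch below applies plain fixed-bandwidth banding to a sample covariance matrix and reports Frobenius errors. The convex optimization estimator of the record, with its adaptive Toeplitz taper, is not reproduced; the true covariance and the bandwidths are assumptions.

```python
# Sketch: simple banding of the sample covariance matrix for variables with a
# known ordering. This is the classical fixed-bandwidth baseline, not the
# convex banding estimator of the paper; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
p, n, bw = 30, 200, 3

# True covariance: banded Toeplitz with entries decaying off the diagonal.
idx = np.arange(p)
Sigma = np.where(np.abs(idx[:, None] - idx[None, :]) <= bw,
                 0.5 ** np.abs(idx[:, None] - idx[None, :]), 0.0)

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)                              # sample covariance

def band(S, k):
    """Keep entries within k of the diagonal, zero out the rest."""
    d = np.abs(np.subtract.outer(np.arange(len(S)), np.arange(len(S))))
    return np.where(d <= k, S, 0.0)

for k in (1, 3, 10, p):                                   # k = p means no banding
    err = np.linalg.norm(band(S, k) - Sigma, ord="fro")
    print(f"bandwidth {k:2d}: Frobenius error {err:.3f}")
```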

  17. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.

  18. Off-diagonal helicity density matrix elements for vector mesons produced in polarized e+e- processes

    International Nuclear Information System (INIS)

    Anselmino, M.; Murgia, F.; Quintairos, P.

    1999-04-01

    Final state q q-bar interactions give origin to non zero values of the off-diagonal element ρ1,-1 of the helicity density matrix of vector mesons produced in e+e- annihilations, as confirmed by recent OPAL data on φ, D* and K*'s. New predictions are given for ρ1,-1 of several mesons produced at large xE and small pT - i.e. collinear with the parent jet - in the annihilation of polarized e+ and e-; the results depend strongly on the elementary dynamics and allow further non trivial tests of the standard model. (author)

  19. Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor

    Directory of Open Access Journals (Sweden)

    Jonathan Sheu

    Full Text Available Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm² flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation.

  20. Partitioning sparse rectangular matrices for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, T.G.

    1998-05-01

    The authors are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well-studied in the square symmetric case, but the rectangular problem has received very little attention. They will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. They will extend the spectral partitioning method for symmetric matrices to the rectangular case and compare this method to three new methods -- the alternating partitioning method and two hybrid methods. The hybrid methods will be shown to be best.

  1. Fast multipole preconditioners for sparse matrices arising from elliptic equations

    KAUST Repository

    Ibeid, Huda

    2017-11-09

    Among optimal hierarchical algorithms for the computational solution of elliptic problems, the fast multipole method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxable global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Here, we do not discuss the well developed applications of FMM to implement matrix-vector multiplications within Krylov solvers of boundary element methods. Instead, we propose using FMM for the volume-to-volume contribution of inhomogeneous Poisson-like problems, where the boundary integral is a small part of the overall computation. Our method may be used to precondition sparse matrices arising from finite difference/element discretizations, and can handle a broader range of scientific applications. It is capable of algebraic convergence rates down to the truncation error of the discretized PDE comparable to those of multigrid methods, and it offers potentially superior multicore and distributed memory scalability properties on commodity architecture supercomputers. Compared with other methods exploiting the low-rank character of off-diagonal blocks of the dense resolvent operator, FMM-preconditioned Krylov iteration may reduce the amount of communication because it is matrix-free and exploits the tree structure of FMM. We describe our tests in reproducible detail with freely available codes and outline directions for further extensibility.

  2. Fast multipole preconditioners for sparse matrices arising from elliptic equations

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Pestana, Jennifer; Keyes, David E.

    2017-01-01

    Among optimal hierarchical algorithms for the computational solution of elliptic problems, the fast multipole method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxable global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Here, we do not discuss the well developed applications of FMM to implement matrix-vector multiplications within Krylov solvers of boundary element methods. Instead, we propose using FMM for the volume-to-volume contribution of inhomogeneous Poisson-like problems, where the boundary integral is a small part of the overall computation. Our method may be used to precondition sparse matrices arising from finite difference/element discretizations, and can handle a broader range of scientific applications. It is capable of algebraic convergence rates down to the truncation error of the discretized PDE comparable to those of multigrid methods, and it offers potentially superior multicore and distributed memory scalability properties on commodity architecture supercomputers. Compared with other methods exploiting the low-rank character of off-diagonal blocks of the dense resolvent operator, FMM-preconditioned Krylov iteration may reduce the amount of communication because it is matrix-free and exploits the tree structure of FMM. We describe our tests in reproducible detail with freely available codes and outline directions for further extensibility.
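
    How a matrix-free preconditioner plugs into a Krylov solver can be sketched with SciPy's LinearOperator interface. In the hedged example below an incomplete LU factorization stands in for the FMM-based preconditioner of the two records above, which is not implemented here; the 2-D Poisson matrix and sizes are illustrative.

```python
# Sketch: plugging a matrix-free preconditioner into a Krylov solver through
# SciPy's LinearOperator interface. An incomplete-LU factorisation stands in
# for the FMM-based preconditioner of the paper, which is not implemented here.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                           # grid points per dimension
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()      # 2-D Poisson matrix
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-3)               # stand-in for an FMM-type V-cycle
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

iters = {"plain": 0, "preconditioned": 0}
def make_counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x_plain, _ = spla.cg(A, b, callback=make_counter("plain"))
x_prec, _ = spla.cg(A, b, M=M, callback=make_counter("preconditioned"))
print(iters)                                     # preconditioning cuts the iteration count
```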

  3. Traditional vectors as an introduction to geometric algebra

    International Nuclear Information System (INIS)

    Carroll, J E

    2003-01-01

    The 2002 Oersted Medal Lecture by David Hestenes concerns the many advantages for education in physics if geometric algebra were to replace standard vector algebra. However, such a change has difficulties for those who have been taught traditionally. A new way of introducing geometric algebra is presented here using a four-element array composed of traditional vector and scalar products. This leads to an explicit 4 x 4 matrix representation which contains key requirements for three-dimensional geometric algebra. The work can be extended to include Maxwell's equations where it is found that curl and divergence appear naturally together. However, to obtain an explicit representation of space-time algebra with the correct behaviour under Lorentz transformations, an 8 x 8 matrix representation has to be formed. This leads to a Dirac representation of Maxwell's equations showing that space-time algebra has hidden within its formalism the symmetry of 'parity, charge conjugation and time reversal'
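
    The defining identity behind the article's four-element array, uv = u·v + u∧v, can be checked numerically in the closely related 2x2 complex (Pauli-matrix) representation of three-dimensional space; the article's explicit 4x4 real representation is not reconstructed here. The sketch below is only that numerical check.

```python
# Sketch: checking the key geometric-algebra identity numerically in the 2x2
# complex (Pauli-matrix) representation of 3-D space. The article builds an
# explicit 4x4 real representation instead; the identity being illustrated,
# u v = u.v + u^v (with the wedge part carried by the cross product), is the same.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])        # Pauli matrices sigma_x, sigma_y, sigma_z

def as_matrix(v):
    """Represent the vector v as v . sigma."""
    return np.tensordot(v, sigma, axes=1)

rng = np.random.default_rng(7)
u, v = rng.standard_normal(3), rng.standard_normal(3)

lhs = as_matrix(u) @ as_matrix(v)                       # geometric product u v
rhs = np.dot(u, v) * np.eye(2) + 1j * as_matrix(np.cross(u, v))
print(np.allclose(lhs, rhs))                            # True
```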

  4. Optimal deep neural networks for sparse recovery via Laplace techniques

    OpenAIRE

    Limmer, Steffen; Stanczak, Slawomir

    2017-01-01

    This paper introduces Laplace techniques for designing a neural network, with the goal of estimating simplex-constraint sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the problem of computing the centroid of some polytope that results from the intersection of the simplex and an affine subspace determined by the measurements. Owing to the specific structure, it is shown that the centroid ca...

  5. Reducing Actinide Production Using Inert Matrix Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Deinert, Mark [Colorado School of Mines, Golden, CO (United States)

    2017-08-23

    The environmental and geopolitical problems that surround nuclear power stem largely from the long-lived transuranic isotopes of Am, Cm, Np and Pu that are contained in spent nuclear fuel. New methods for transmuting these elements into more benign forms are needed. Current research efforts focus largely on the development of fast burner reactors, because it has been shown that they could dramatically reduce the accumulation of transuranics. However, despite five decades of effort, fast reactors have yet to achieve industrial viability. A critical limitation to this, and other such strategies, is that they require a type of spent fuel reprocessing that can efficiently separate all of the transuranics from the fission products with which they are mixed. Unfortunately, the technology for doing this on an industrial scale is still in development. In this project, we explore a strategy for transmutation that can be deployed using existing, current generation reactors and reprocessing systems. We show that use of an inert matrix fuel to recycle transuranics in a conventional pressurized water reactor could reduce overall production of these materials by an amount that is similar to what is achievable using proposed fast reactor cycles. Furthermore, we show that these transuranic reductions can be achieved even if the fission products are carried into the inert matrix fuel along with the transuranics, bypassing the critical separations hurdle described above. The implications of these findings are significant, because they imply that inert matrix fuel could be made directly from the material streams produced by the commercially available PUREX process. Zirconium dioxide would be an ideal choice of inert matrix in this context because it is known to form a stable solid solution with both fission products and transuranics.

  6. Implementing the conjugate gradient algorithm on multi-core systems

    NARCIS (Netherlands)

    Wiggers, W.A.; Bakker, Vincent; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria; Nurmi, J.; Takala, J.; Vainio, O.

    2007-01-01

    In linear solvers, like the conjugate gradient algorithm, sparse matrix-vector multiplication is an important kernel. Due to the sparseness of the matrices, the solver runs relatively slowly. For digital optical tomography (DOT), a large set of linear equations has to be solved, which currently takes
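
    A textbook conjugate gradient loop makes the point of the record concrete: its only expensive kernel is the sparse matrix-vector product. The sketch below is generic and illustrative; the DOT system mentioned above is replaced by an assumed well-conditioned test matrix.

```python
# Sketch: a textbook conjugate gradient loop. Its only expensive kernel is the
# sparse matrix-vector product A @ p, which is why SpMV performance dominates
# solvers like this one.
import numpy as np
import scipy.sparse as sp

def cg(A, b, tol=1e-8, maxiter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p                      # the SpMV kernel
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 1000
# Shifted 1-D Laplacian: symmetric positive definite and well conditioned.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") \
    + sp.identity(n, format="csr")
b = np.ones(n)
x = cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```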

  7. Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data

    Science.gov (United States)

    Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information has become a more and more pressing issue. Recent studies tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexities are still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First, we sampled the whole brain’s signals via different sampling methods; then the sampled signals were aggregated into an input data matrix to learn a dictionary; finally, this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by ten times without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924

  8. The application of sparse estimation of covariance matrix to quadratic discriminant analysis

    OpenAIRE

    Sun, Jiehuan; Zhao, Hongyu

    2015-01-01

    Background Although Linear Discriminant Analysis (LDA) is commonly used for classification, it may not be directly applied in genomics studies due to the large p, small n problem in these studies. Different versions of sparse LDA have been proposed to address this significant challenge. One implicit assumption of various LDA-based methods is that the covariance matrices are the same across different classes. However, rewiring of genetic networks (therefore different covariance matrices) acros...

  9. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    Science.gov (United States)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
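
    The residue-arithmetic idea can be illustrated without any optics: a matrix-vector product is computed independently modulo several small coprime moduli (so each channel only ever handles look-up-table-sized numbers) and the exact result is recovered with the Chinese remainder theorem. The moduli and matrix below are illustrative assumptions; the light valve and holographic tables are of course not modelled.

```python
# Sketch: the residue-arithmetic idea behind the optical processor. A
# matrix-vector product is computed independently in several small modular
# channels and the exact integer result is recovered with the Chinese
# remainder theorem. All values are illustrative.
import numpy as np
from math import prod

moduli = (13, 17, 19, 23)            # pairwise coprime; their product bounds the result
M = prod(moduli)

def crt(residues, moduli):
    """Recombine one residue per modulus into the unique integer mod prod(moduli)."""
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse
    return x % M

rng = np.random.default_rng(8)
A = rng.integers(0, 10, size=(4, 4))
v = rng.integers(0, 10, size=4)

# One small, look-up-table-sized computation per modulus.
channel_results = [((A % m) @ (v % m)) % m for m in moduli]

# Recombine each entry of the result vector.
y = np.array([crt([int(ch[i]) for ch in channel_results], moduli)
              for i in range(len(v))])
print(np.array_equal(y, A @ v))        # True as long as entries stay below M
```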

  10. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  11. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge about a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with center-bias prior, and l2 vector normalization) can be excluded from our proposed approach. They jointly improve image representation and classification performance.

  12. Estimation of pure autoregressive vector models for revenue series ...

    African Journals Online (AJOL)

    This paper aims at applying a multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while the others are predictor vectors. By matrix expansion each vector, whether ...
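
    The record is truncated, so only the simplest related computation is sketched here: fitting a constant-coefficient VAR(1) model to simulated series by ordinary least squares. The time-varying-coefficient estimation the abstract refers to is not reproduced, and all numbers are assumptions.

```python
# Sketch: fitting a constant-coefficient VAR(1) model y_t = c + A y_{t-1} + e_t
# by ordinary least squares on simulated revenue-like series. The time-varying
# coefficient estimation discussed in the abstract is not reproduced here.
import numpy as np

rng = np.random.default_rng(9)
T, k = 300, 3                                   # sample length, number of series
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.2],
                   [0.1, 0.0, 0.3]])
c_true = np.array([1.0, 0.5, 2.0])

y = np.zeros((T, k))
for t in range(1, T):
    y[t] = c_true + y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(k)

# Stack lagged values with an intercept column and solve the least-squares problem.
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])    # (T-1) x (k+1)
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # (k+1) x k: intercept row, then A^T

c_hat, A_hat = B[0], B[1:].T
print("estimated intercept:", np.round(c_hat, 2))
print("estimated coefficient matrix:\n", np.round(A_hat, 2))
```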

  13. Mini-lecture course: Introduction into hierarchical matrix technique

    KAUST Repository

    Litvinenko, Alexander

    2017-12-14

    The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations (mesh points). The H-matrix technique allows us to work with a general class of matrices (not only structured, Toeplitz or sparse matrices). H-matrices can keep the H-matrix data format during linear algebra operations (inverse, update, Schur complement).
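
    The core ingredient of the H-matrix format, low-rank compression of off-diagonal blocks, can be shown on a toy kernel matrix: a truncated SVD of one block already illustrates where the O(kn log n) storage comes from. The hedged sketch below does only that; hierarchical partitioning and H-arithmetic (inverse, update, Schur complement) are not shown, and the kernel and tolerance are assumptions.

```python
# Sketch: the core ingredient of the H-matrix format, namely low-rank
# compression of an off-diagonal block of a kernel matrix by truncated SVD.
# Full hierarchical partitioning and H-arithmetic are not shown.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))      # smooth kernel matrix

block = K[: n // 2, n // 2 :]                          # off-diagonal block
U, s, Vt = np.linalg.svd(block, full_matrices=False)

tol = 1e-8
k = int(np.sum(s > tol * s[0]))                        # numerical rank
block_lr = (U[:, :k] * s[:k]) @ Vt[:k]

print("block rank at tol 1e-8:", k, "of", min(block.shape))
print("relative error:", np.linalg.norm(block - block_lr) / np.linalg.norm(block))
print("storage: full =", block.size,
      "entries, low-rank =", k * (block.shape[0] + block.shape[1]))
```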

  14. Covariance matrix estimation for stationary time series

    OpenAIRE

    Xiao, Han; Wu, Wei Biao

    2011-01-01

    We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...

  15. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  16. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  17. SLAP, Large Sparse Linear System Solution Package

    International Nuclear Information System (INIS)

    Greenbaum, A.

    1987-01-01

    1 - Description of program or function: SLAP is a set of routines for solving large sparse systems of linear equations. One need not store the entire matrix - only the nonzero elements and their row and column numbers. Any nonzero structure is acceptable, so the linear system solver need not be modified when the structure of the matrix changes. Auxiliary storage space is acquired and released within the routines themselves by use of the LRLTRAN POINTER statement. 2 - Method of solution: SLAP contains one direct solver, a band matrix factorization and solution routine, BAND, and several iterative solvers. The iterative routines are as follows: JACOBI, Jacobi iteration; GS, Gauss-Seidel iteration; ILUIR, incomplete LU decomposition with iterative refinement; DSCG and ICCG, diagonal scaling and incomplete Cholesky decomposition with conjugate gradient iteration (for symmetric positive definite matrices only); DSCGN and ILUGGN, diagonal scaling and incomplete LU decomposition with conjugate gradient iteration on the normal equations; DSBCG and ILUBCG, diagonal scaling and incomplete LU decomposition with bi-conjugate gradient iteration; and DSOMN and ILUOMN, diagonal scaling and incomplete LU decomposition with ORTHOMIN iteration

  18. Quantum phase transitions in matrix product states

    International Nuclear Information System (INIS)

    Zhu Jingmin

    2008-01-01

    We present a new general and much simpler scheme to construct various quantum phase transitions (QPTs) in spin chain systems with matrix product ground states. By use of the scheme we take into account one kind of matrix product state (MPS) QPT and provide a concrete model. We also study the properties of the concrete example and show that a kind of QPT appears, accompanied by a discontinuity of the parity absent block physical observable, a diverging correlation length only for the parity absent block operator, and other properties: the fixed point of the transition is an isolated intermediate-coupling fixed point of the renormalization flow, and the entanglement entropy of a half-infinite chain is discontinuous. (authors)

  19. Quantum Phase Transitions in Matrix Product States

    International Nuclear Information System (INIS)

    Jing-Min, Zhu

    2008-01-01

    We present a new general and much simpler scheme to construct various quantum phase transitions (QPTs) in spin chain systems with matrix product ground states. By use of the scheme we take into account one kind of matrix product state (MPS) QPT and provide a concrete model. We also study the properties of the concrete example and show that a kind of QPT appears, accompanied by a discontinuity of the parity absent block physical observable, a diverging correlation length only for the parity absent block operator, and other properties: the fixed point of the transition is an isolated intermediate-coupling fixed point of the renormalization flow, and the entanglement entropy of a half-infinite chain is discontinuous.
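
    A minimal matrix product state makes the notion concrete: the n-qubit GHZ state has a bond-dimension-2 MPS whose amplitudes are traces of ordered products of 2x2 matrices. The sketch below only evaluates those amplitudes; the specific MPS model exhibiting the QPT discussed in these two records is not reproduced.

```python
# Sketch: a minimal matrix product state. With periodic boundary conditions,
# the n-qubit GHZ state has a bond dimension 2 MPS whose amplitudes are traces
# of products of 2x2 matrices. The QPT model of the records is not reproduced.
import numpy as np
from itertools import product

A = {0: np.array([[1.0, 0.0], [0.0, 0.0]]),    # matrix attached to spin value 0
     1: np.array([[0.0, 0.0], [0.0, 1.0]])}    # matrix attached to spin value 1

def amplitude(config):
    """Unnormalised MPS amplitude: trace of the ordered matrix product."""
    M = np.eye(2)
    for s in config:
        M = M @ A[s]
    return np.trace(M)

n = 4
amps = np.array([amplitude(cfg) for cfg in product((0, 1), repeat=n)])
amps /= np.linalg.norm(amps)                   # normalise the state

nonzero = [cfg for cfg, a in zip(product((0, 1), repeat=n), amps) if abs(a) > 1e-12]
print(nonzero)     # only (0,0,0,0) and (1,1,1,1): the GHZ state
```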

  20. Matrix factorizations, minimal models and Massey products

    International Nuclear Information System (INIS)

    Knapp, Johanna; Omer, Harun

    2006-01-01

    We present a method to compute the full non-linear deformations of matrix factorizations for ADE minimal models. This method is based on the calculation of higher products in the cohomology, called Massey products. The algorithm yields a polynomial ring whose vanishing relations encode the obstructions of the deformations of the D-branes characterized by these matrix factorizations. This coincides with the critical locus of the effective superpotential which can be computed by integrating these relations. Our results for the effective superpotential are in agreement with those obtained from solving the A-infinity relations. We point out a relation to the superpotentials of Kazama-Suzuki models. We will illustrate our findings by various examples, putting emphasis on the E6 minimal model

  1. Integrating Molecular Computation and Material Production in an Artificial Subcellular Matrix

    DEFF Research Database (Denmark)

    Fellermann, Harold; Hadorn, Maik; Bönzli, Eva

    Living systems are unique in that they integrate molecular recognition and information processing with material production on the molecular scale. Predominant locus of this integration is the cellular matrix, where a multitude of biochemical reactions proceed simultaneously in highly compartmentalized reaction compartments that interact and get delivered through vesicle trafficking. The European Commission funded project MatchIT (Matrix for Chemical IT) aims at creating an artificial cellular matrix that seamlessly integrates information processing and material production in much the same...

  2. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    , if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom...... hampers the task of real-time processing. In a second study, some of the issue with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated both through simulations and via experimental setups in various flow conditions...

  3. In vivo production of recombinant proteins using occluded recombinant AcMNPV-derived baculovirus vectors.

    Science.gov (United States)

    Guijarro-Pardo, Eva; Gómez-Sebastián, Silvia; Escribano, José M

    2017-12-01

    Trichoplusia ni insect larvae infected with vectors derived from the Autographa californica multiple nucleopolyhedrovirus (AcMNPV) are an excellent alternative to insect cells cultured in conventional bioreactors for producing recombinant proteins, for reasons of productivity and cost-efficiency. However, there is still a lot of work to do to reduce the manual procedures commonly required in this production platform, which limit its scalability. A current bottleneck to be circumvented in order to increase the scalability of this platform technology is the need to inoculate larvae by injection with polyhedrin-negative baculovirus vectors (Polh-), which are commonly used for production in insect cell cultures but lack oral infectivity. In this work we have developed a straightforward alternative to obtain orally infective vectors derived from AcMNPV and expressing recombinant proteins that can be administered to the insect larvae (Trichoplusia ni) by feeding, formulated in the insect diet. The approach developed was based on the use of a recombinant polyhedrin protein expressed by a recombinant vector (Polh+), able to co-occlude any recombinant Polh- baculovirus vector expressing a recombinant protein. A second alternative was developed by the generation of a dual vector co-expressing the recombinant polyhedrin protein and the foreign gene of interest to obtain the occluded viruses. Additionally, by incorporating a reporter gene into the helper Polh+ vector, it was possible to follow the infection by co-occluded viruses visually in insect larvae, which will help to homogenize infection conditions. By using these methodologies, the production of recombinant proteins in per os infected larvae, without manual infection procedures, was very similar in yield to that obtained by manual injection of recombinant Polh- AcMNPV-based vectors expressing the same proteins. However, further analyses will be required for a

  4. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  5. Structural Sparse Tracking

    KAUST Repository

    Zhang, Tianzhu

    2015-06-01

    Sparse representation has been applied to visual tracking by finding the best target candidate with minimal reconstruction error using target templates. However, most sparse representation based trackers only consider holistic or local representations and do not make full use of the intrinsic structure among and inside target candidates, thereby making the representation less effective when similar objects appear or under occlusion. In this paper, we propose a novel Structural Sparse Tracking (SST) algorithm, which not only exploits the intrinsic relationship among target candidates and their local patches to learn their sparse representations jointly, but also preserves the spatial layout structure among the local patches inside each target candidate. We show that our SST algorithm accommodates most existing sparse trackers as special cases, retaining their respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs favorably against several state-of-the-art methods.
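
    To make the reconstruction-error criterion used by such sparse trackers concrete, the following sketch scores a handful of candidate patches against a template dictionary via an l1-regularized fit and keeps the candidate with the smallest residual. It is a minimal illustration of the generic sparse-tracking idea, not the authors' SST implementation; the template matrix, candidate patches and regularization strength are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data: 10 vectorized target templates of dimension 256 and
# 5 candidate patches sampled around the previous target location.
templates = rng.normal(size=(256, 10))    # dictionary D, columns = templates
candidates = rng.normal(size=(5, 256))    # rows = vectorized candidate patches

def reconstruction_error(y, D, alpha=0.01):
    """Sparse-code y over the template dictionary D and return the residual norm."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)
    return np.linalg.norm(y - D @ lasso.coef_)

# The tracker keeps the candidate with the smallest reconstruction error.
errors = [reconstruction_error(y, templates) for y in candidates]
best = int(np.argmin(errors))
print("selected candidate:", best, "error:", errors[best])
```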

  6. Double vector meson production in γγ interactions at hadronic colliders

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, V.P. [Lund University, Department of Astronomy and Theoretical Physics, Lund (Sweden); Universidade Federal de Pelotas, High and Medium Energy Group, Instituto de Fisica e Matematica, Pelotas, RS (Brazil); Moreira, B.D.; Navarra, F.S. [Universidade de Sao Paulo, Instituto de Fisica, C.P. 66318, Sao Paulo, SP (Brazil)

    2016-03-15

    In this paper we revisit double vector meson production in γγ interactions in heavy ion collisions and present, for the first time, predictions for ρρ and J/ΨJ/Ψ production in proton-nucleus and proton-proton collisions. In order to obtain realistic predictions for rapidity distributions and total cross sections for double vector meson production in ultraperipheral hadronic collisions, we take into account the description of the γγ → VV cross section at low energies as well as its behavior at large energies, associated with the gluonic interaction between the color dipoles. Our results demonstrate that double ρ production is dominated by the low energy behavior of the γγ → VV cross section. In contrast, for double J/Ψ production, the contribution associated with the QCD dynamics at high energies is significant, mainly in pp collisions. Predictions for the RHIC, LHC, FCC, and CEPC-SPPC energies are shown. (orig.)

  7. Matrix product solution of an inhomogeneous multi-species TASEP

    Science.gov (United States)

    Arita, Chikashi; Mallick, Kirone

    2013-03-01

    We study a multi-species exclusion process with inhomogeneous hopping rates and find a matrix product representation for the stationary state of this model. The matrices belong to the tensor algebra of the fundamental quadratic algebra associated with the exclusion process. We show that our matrix product representation is equivalent to a graphical construction proposed by Ayyer and Linusson (2012 arXiv:1206.0316), which generalizes an earlier probabilistic construction due to Ferrari and Martin (2007 Ann. Prob. 35 807).

  8. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs  are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance....
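
    For readers unfamiliar with the construction, the sketch below builds a generator matrix of a small matrix-product code [C1 C2]·A over GF(2). The component codes and the matrix A are toy examples, and the decoding algorithm of the paper is not reproduced here; only the standard generator-matrix structure is illustrated.

```python
import numpy as np

def matrix_product_generator(gens, A):
    """Generator matrix of the matrix-product code [C1 ... Cs]·A over GF(2).

    gens : list of s generator matrices G_i (k_i x n), all with the same length n
    A    : s x s binary matrix
    Codewords are concatenations of the s blocks sum_i A[i, j] * c_i with c_i in C_i,
    so the (i, j) block of the generator matrix is A[i, j] * G_i.
    """
    s = len(gens)
    row_groups = [np.hstack([(A[i, j] * gens[i]) % 2 for j in range(s)])
                  for i in range(s)]
    return np.vstack(row_groups) % 2

# Toy example: a [3,2] code, a [3,1] repetition code and a full-rank 2 x 2 matrix A.
G1 = np.array([[1, 0, 1],
               [0, 1, 1]])
G2 = np.array([[1, 1, 1]])
A = np.array([[1, 1],
              [0, 1]])

G = matrix_product_generator([G1, G2], A)
print(G)   # 3 x 6 generator matrix of the resulting length-6 code
```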

  9. Light vector meson production at the LHC with the ALICE detector

    CERN Document Server

    Incani, Elisa

    2013-01-01

    The measurement of light vector meson production (\rho, \omega, \phi) in pp collisions provides insight into soft Quantum Chromodynamics (QCD) processes in the LHC energy range. Calculations in this regime are based on QCD-inspired phenomenological models that must be tuned to the data. Moreover, light vector meson production provides a reference for high-energy heavy-ion collisions. A measurement of the \phi and \omega differential cross sections was performed by the ALICE experiment in pp collisions at 7 TeV, and of the \phi cross section in pp collisions at 2.76 TeV, through their decay to muon pairs in the rapidity interval 2.5 < y < 4.

  10. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  11. On affine non-negative matrix factorization

    DEFF Research Database (Denmark)

    Laurberg, Hans; Hansen, Lars Kai

    2007-01-01

    We generalize the non-negative matrix factorization (NMF) generative model to incorporate an explicit offset. Multiplicative estimation algorithms are provided for the resulting sparse affine NMF model. We show that the affine model has improved uniqueness properties and leads to more accurate id...

  12. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of the speech rank value to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
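
    A much-simplified sketch of the signal-subspace idea is given below: the noisy frame is embedded in a Hankel (Toeplitz-structured) trajectory matrix, the leading singular subspace is kept as the speech component, and the frame is reconstructed by anti-diagonal averaging. The full low-rank-plus-sparse decomposition and the rank-estimation step of the paper are replaced here by a plain rank-r truncation with hand-picked parameters on a synthetic signal.

```python
import numpy as np
from scipy.linalg import hankel

def subspace_denoise(x, L=64, rank=8):
    """Rank-truncated Hankel (signal-subspace) denoising of a 1-D frame x."""
    N = len(x)
    H = hankel(x[:L], x[L - 1:])                 # L x (N - L + 1) trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # keep the leading singular subspace
    # Reconstruct by averaging over anti-diagonals (each corresponds to one sample).
    y = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(H_low.shape[1]):
            y[i + j] += H_low[i, j]
            counts[i + j] += 1
    return y / counts

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 31 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = subspace_denoise(noisy)
print("noise power before/after:",
      np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```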

  13. EMMA: An Extensible Mammalian Modular Assembly Toolkit for the Rapid Design and Production of Diverse Expression Vectors.

    Science.gov (United States)

    Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi

    2017-07-21

    Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step golden-gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon-based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports simple assembly of combinatorial libraries and hierarchical assembly for the production of larger multigenetic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.

  14. Multi-information fusion sparse coding with preserving local structure for hyperspectral image classification

    Science.gov (United States)

    Wei, Xiaohui; Zhu, Wen; Liao, Bo; Gu, Changlong; Li, Weibiao

    2017-10-01

    The key question in sparse coding (SC) is how to exploit the information that already exists to acquire robust sparse representations (SRs) that distinguish different objects for hyperspectral image (HSI) classification. We propose a multi-information fusion SC framework, which fuses the spectral, spatial, and label information at the same level, to address this question. In particular, pixels from disjoint spatial clusters, which are obtained by cutting the given HSI in space, are individually and sparsely encoded. Then, due to the importance of spatial structure, graph- and hypergraph-based regularizers are enforced to promote smoothness of the obtained representations and to preserve the local consistency within each spatial cluster. The latter simultaneously considers the spectral, spatial, and label information of multiple pixels that are highly likely to share the same label. Finally, a linear support vector machine is selected as the final classifier, with the learned SRs as input. Experiments conducted on three frequently used real HSIs show that our method achieves satisfactory results compared with other state-of-the-art methods.
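
    The final classification step can be mimicked with off-the-shelf tools, as in the sketch below: pixels are sparsely encoded over a dictionary and the codes are fed to a linear SVM. This omits the graph- and hypergraph-based regularizers and the spatial clustering that are central to the paper; the data, labels and dictionary are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 200 "pixels" with 50 spectral bands and 3 class labels,
# plus a fixed random dictionary of 30 atoms (a learned dictionary in the paper).
n_pixels, n_bands, n_atoms = 200, 50, 30
X = rng.normal(size=(n_pixels, n_bands))
y = rng.integers(0, 3, size=n_pixels)
D = rng.normal(size=(n_atoms, n_bands))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms

# Sparse representations of the pixels over the dictionary (5 non-zeros each).
coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                    transform_n_nonzero_coefs=5)
codes = coder.transform(X)

# A linear SVM takes the sparse codes as input features, as in the final step above.
clf = LinearSVC(C=1.0, max_iter=5000).fit(codes, y)
print("training accuracy on synthetic data:", clf.score(codes, y))
```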

  15. Matrix transformation relation for the radial integrals of lepton scattering processes

    International Nuclear Information System (INIS)

    Sud, K.K.; Soto Vargas, C.W.; Sharma, D.K.

    1988-01-01

    The radial integrals of many physical problems involving products of initial- and final-state wave functions and the Coulomb interaction are expressible in terms of special cases of generalized hypergeometric functions. In the present work, the generalized hypergeometric functions become elements of a gamma vector which, by means of a partial differential equation and a matrix transformation relation, can be used in calculating the gamma vector in physical regions where the hypergeometric functions are nonconvergent or very slowly converging. Our matrix transformation relation contains as special cases the transformation relations of Gauss' hypergeometric functions 2F1, Appell's hypergeometric functions F2, and Lauricella's functions FL. The use of contiguous relations along with the transformation relations presented in this paper will facilitate the calculation of physical processes involving such radial integrals.

  16. Mini-lecture course: Introduction into hierarchical matrix technique

    KAUST Repository

    Litvinenko, Alexander

    2017-01-01

    allows us to work with a general class of matrices (not only structured, Toeplitz or sparse). H-matrices can keep the H-matrix data format during linear algebra operations (inverse, update, Schur complement).

  17. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
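
    A quick way to experiment with this estimator is the scikit-learn implementation; the sketch below fits the graphical lasso to data drawn from a hand-built chain-structured Gaussian and inspects the sparsity pattern of the estimated precision matrix. The regularization strength is arbitrary, not tuned.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 20

# Hand-built sparse precision matrix (tridiagonal => chain-structured graph).
precision = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
covariance = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(p), covariance, size=1000)

model = GraphicalLasso(alpha=0.05, max_iter=200).fit(X)
est_precision = model.precision_

# Edges of the estimated graph = off-diagonal non-zeros of the precision matrix.
edges = np.sum(np.abs(est_precision[np.triu_indices(p, k=1)]) > 1e-4)
print("estimated number of edges:", edges)   # the true chain graph has p - 1 = 19 edges
```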

  18. PARTRACK - A particle tracking algorithm for transport and dispersion of solutes in a sparsely fractured rock

    International Nuclear Information System (INIS)

    Svensson, Urban

    2001-04-01

    A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in a sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that the introduction of this feature allows for the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the split up of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion and matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulation of transport and dispersion of a solute in a sparsely fractured rock

  19. MATLAB matrix algebra

    CERN Document Server

    Pérez López, César

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...

  20. ℓ0 -based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation

    Science.gov (United States)

    Xu, Xia; Shi, Zhenwei; Pan, Bin

    2018-07-01

    Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, so a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectral and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term is able to enforce the individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experimental part, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
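
    The binary encoding of the spectral library can be illustrated with a toy example (random library and a three-endmember mixed pixel; not the full multi-objective SMoSU algorithm): a candidate endmember subset is a bit vector, a neighbour is generated by bit-wise flipping, and both are scored by the two competing objectives, reconstruction error and sparsity.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_lib = 50, 30

library = np.abs(rng.normal(size=(n_bands, n_lib)))     # toy spectral library
true_support = [3, 11, 27]
pixel = library[:, true_support] @ np.array([0.5, 0.3, 0.2]) \
        + 0.01 * rng.normal(size=n_bands)

def score(bits):
    """Reconstruction error of the pixel using only the endmembers selected by bits."""
    idx = np.flatnonzero(bits)
    if idx.size == 0:
        return np.linalg.norm(pixel)
    _, rnorm = nnls(library[:, idx], pixel)   # non-negative abundances
    return rnorm

bits = np.zeros(n_lib, dtype=int)
bits[[3, 11, 5]] = 1                       # a candidate support (one wrong endmember)
neighbour = bits.copy()
neighbour[5], neighbour[27] = 0, 1         # bit-wise flip toward the true support

print("candidate error:", score(bits), " sparsity:", bits.sum())
print("neighbour error:", score(neighbour), " sparsity:", neighbour.sum())
```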

  1. Mutation rules and the evolution of sparseness and modularity in biological systems.

    Directory of Open Access Journals (Sweden)

    Tamar Friedlander

    Full Text Available Biological systems exhibit two structural features on many levels of organization: sparseness, in which only a small fraction of possible interactions between components actually occur; and modularity--the near decomposability of the system into modules with distinct functionality. Recent work suggests that modularity can evolve in a variety of circumstances, including goals that vary in time such that they share the same subgoals (modularly varying goals), or when connections are costly. Here, we studied the origin of modularity and sparseness focusing on the nature of the mutation process, rather than on connection cost or variations in the goal. We use simulations of evolution with different mutation rules. We found that commonly used sum-rule mutations, in which interactions are mutated by adding random numbers, do not lead to modularity or sparseness except in special situations. In contrast, product-rule mutations in which interactions are mutated by multiplying by random numbers--a better model for the effects of biological mutations--led to sparseness naturally. When the goals of evolution are modular, in the sense that specific groups of inputs affect specific groups of outputs, product-rule mutations also lead to modular structure; sum-rule mutations do not. Product-rule mutations generate sparseness and modularity because they tend to reduce interactions, and to keep small interaction terms small.
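
    The qualitative difference between the two mutation rules can be reproduced with a toy random walk on individual interaction weights (one arbitrary choice of mutation distributions, not the authors' full evolutionary simulation): additive mutations leave weights wandering at order-one magnitudes, whereas multiplicative mutations drive almost all weights toward zero, i.e. toward sparseness.

```python
import numpy as np

rng = np.random.default_rng(42)
steps, trials = 200, 1000
w_sum = np.ones(trials)    # weights evolved with sum-rule (additive) mutations
w_prod = np.ones(trials)   # weights evolved with product-rule (multiplicative) mutations

for _ in range(steps):
    w_sum += rng.uniform(-0.5, 0.5, size=trials)     # add a random number
    w_prod *= rng.uniform(0.0, 2.0, size=trials)     # multiply by a random number (mean 1)

# Additive mutations leave weights spread around order-one magnitudes; multiplicative
# mutations shrink almost all weights toward zero, i.e. toward a sparse interaction set.
print("fraction |w| < 1e-3, sum-rule:    ", np.mean(np.abs(w_sum) < 1e-3))
print("fraction |w| < 1e-3, product-rule:", np.mean(np.abs(w_prod) < 1e-3))
```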

  2. A Jacobi-Davidson type method for the generalized singular value problem

    NARCIS (Netherlands)

    Hochstenbach, M.E.

    2009-01-01

    We discuss a new method for the iterative computation of some of the generalized singular values and vectors of a large sparse matrix. Our starting point is the augmented matrix formulation of the GSVD. The subspace expansion is performed by (approximately) solving a Jacobi–Davidson type correction

  3. Efficient basis formulation for 1+1 dimensional SU(2) lattice gauge theory. Spectral calculations with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2017-07-20

    We propose an explicit formulation of the physical subspace for a 1+1 dimensional SU(2) lattice gauge theory, where the gauge degrees of freedom are integrated out. Our formulation is completely general, and might be potentially suited for the design of future quantum simulators. Additionally, it allows for addressing the theory numerically with matrix product states. We apply this technique to explore the spectral properties of the model and the effect of truncating the gauge degrees of freedom to a small finite dimension. In particular, we determine the scaling exponents for the vector mass. Furthermore, we also compute the entanglement entropy in the ground state and study its scaling towards the continuum limit.

  4. Efficient Basis Formulation for (1+1)-Dimensional SU(2) Lattice Gauge Theory: Spectral Calculations with Matrix Product States

    Directory of Open Access Journals (Sweden)

    Mari Carmen Bañuls

    2017-11-01

    Full Text Available We propose an explicit formulation of the physical subspace for a (1+1)-dimensional SU(2) lattice gauge theory, where the gauge degrees of freedom are integrated out. Our formulation is completely general, and might be potentially suited for the design of future quantum simulators. Additionally, it allows for addressing the theory numerically with matrix product states. We apply this technique to explore the spectral properties of the model and the effect of truncating the gauge degrees of freedom to a small finite dimension. In particular, we determine the scaling exponents for the vector mass. Furthermore, we also compute the entanglement entropy in the ground state and study its scaling towards the continuum limit.

  5. Efficient Basis Formulation for (1+1)-Dimensional SU(2) Lattice Gauge Theory: Spectral Calculations with Matrix Product States

    Science.gov (United States)

    Bañuls, Mari Carmen; Cichy, Krzysztof; Cirac, J. Ignacio; Jansen, Karl; Kühn, Stefan

    2017-10-01

    We propose an explicit formulation of the physical subspace for a (1+1)-dimensional SU(2) lattice gauge theory, where the gauge degrees of freedom are integrated out. Our formulation is completely general, and might be potentially suited for the design of future quantum simulators. Additionally, it allows for addressing the theory numerically with matrix product states. We apply this technique to explore the spectral properties of the model and the effect of truncating the gauge degrees of freedom to a small finite dimension. In particular, we determine the scaling exponents for the vector mass. Furthermore, we also compute the entanglement entropy in the ground state and study its scaling towards the continuum limit.

  6. Efficient basis formulation for 1+1 dimensional SU(2) lattice gauge theory. Spectral calculations with matrix product states

    International Nuclear Information System (INIS)

    Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan; Cichy, Krzysztof; Adam Mickiewicz Univ., Poznan; Jansen, Karl

    2017-01-01

    We propose an explicit formulation of the physical subspace for a 1+1 dimensional SU(2) lattice gauge theory, where the gauge degrees of freedom are integrated out. Our formulation is completely general, and might be potentially suited for the design of future quantum simulators. Additionally, it allows for addressing the theory numerically with matrix product states. We apply this technique to explore the spectral properties of the model and the effect of truncating the gauge degrees of freedom to a small finite dimension. In particular, we determine the scaling exponents for the vector mass. Furthermore, we also compute the entanglement entropy in the ground state and study its scaling towards the continuum limit.

  7. Nutrient depletion in Bacillus subtilis biofilms triggers matrix production

    International Nuclear Information System (INIS)

    Zhang, Wenbo; Seminara, Agnese; Suaris, Melanie; Angelini, Thomas E; Brenner, Michael P; Weitz, David A

    2014-01-01

    Many types of bacteria form colonies that grow into physically robust and strongly adhesive aggregates known as biofilms. A distinguishing characteristic of bacterial biofilms is an extracellular polymeric substance (EPS) matrix that encases the cells and provides physical integrity to the colony. The EPS matrix consists of a large amount of polysaccharide, as well as protein filaments, DNA and degraded cellular materials. The genetic pathways that control the transformation of a colony into a biofilm have been widely studied, and yield a spatiotemporal heterogeneity in EPS production. Spatial gradients in metabolites parallel this heterogeneity in EPS, but nutrient concentration as an underlying physiological initiator of EPS production has not been explored. Here, we study the role of nutrient depletion in EPS production in Bacillus subtilis biofilms. By monitoring simultaneously biofilm size and matrix production, we find that EPS production increases at a critical colony thickness that depends on the initial amount of carbon sources in the medium. Through studies of individual cells in liquid culture we find that EPS production can be triggered at the single-cell level by reducing nutrient concentration. To connect the single-cell assays with conditions in the biofilm, we calculate carbon concentration with a model for the reaction and diffusion of nutrients in the biofilm. This model predicts the relationship between the initial concentration of carbon and the thickness of the colony at the point of internal nutrient deprivation. (paper)

  8. Water ice as a matrix for film production by matrix-assisted pulsed laser evaporation (MAPLE)

    International Nuclear Information System (INIS)

    Rodrigo, K; Schou, J; Toftmann, B; Pedrys, R

    2007-01-01

    We have studied water ice as a matrix for the production of PEG (polyethylene glycol) films by MAPLE at 355 nm. The deposition rate is small compared with other matrices typically used in MAPLE, but the deposition of photofragments from the matrix can be avoided. At target holder temperatures above -50 °C the deposition rate increases strongly, but the evaporation pressure in the MAPLE chamber also increases drastically.

  9. Water ice as a matrix for film production by matrix assisted pulsed laser evaporation (MAPLE)

    DEFF Research Database (Denmark)

    Rodrigo, Katarzyna Agnieszka; Schou, Jørgen; Christensen, Bo Toftmann

    2007-01-01

    We have studied water ice as a matrix for the production of PEG (polyethylene glycol) films by MAPLE at 355 nm. The deposition rate is small compared with other matrices typically used in MAPLE, but the deposition of photofragments from the matrix can be avoided. At temperatures above -50 degrees C...... of the target holder the deposition rate increases strongly, but the evaporation pressure in the MAPLE chamber also increases drastically....

  10. Expansion of the Variational Garrote to a Multiple Measurement Vectors Model

    DEFF Research Database (Denmark)

    Hansen, Sofie Therese; Stahlhut, Carsten; Hansen, Lars Kai

    2013-01-01

    The recovery of sparse signals in underdetermined systems is the focus of this paper. We propose an expanded version of the Variational Garrote, originally presented by Kappen (2011), which can use multiple measurement vectors (MMVs) to further improve source retrieval performance. We show its...

  11. Efficient Pseudorecursive Evaluation Schemes for Non-adaptive Sparse Grids

    KAUST Repository

    Buse, Gerrit

    2014-01-01

    In this work we propose novel algorithms for storing and evaluating sparse grid functions, operating on regular (not spatially adaptive), yet potentially dimensionally adaptive grid types. Besides regular sparse grids our approach includes truncated grids, both with and without boundary grid points. Similar to the implicit data structures proposed in Feuersänger (Dünngitterverfahren für hochdimensionale elliptische partielle Differntialgleichungen. Diploma Thesis, Institut für Numerische Simulation, Universität Bonn, 2005) and Murarasu et al. (Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming. Cambridge University Press, New York, 2011, pp. 25–34) we also define a bijective mapping from the multi-dimensional space of grid points to a contiguous index, such that the grid data can be stored in a simple array without overhead. Our approach is especially well-suited to exploit all levels of current commodity hardware, including cache-levels and vector extensions. Furthermore, this kind of data structure is extremely attractive for today’s real-time applications, as it gives direct access to the hierarchical structure of the grids, while outperforming other common sparse grid structures (hash maps, etc.) which do not match with modern compute platforms that well. For dimensionality d ≤ 10 we achieve good speedups on a 12 core Intel Westmere-EP NUMA platform compared to the results presented in Murarasu et al. (Proceedings of the International Conference on Computational Science—ICCS 2012. Procedia Computer Science, 2012). As we show, this also holds for the results obtained on Nvidia Fermi GPUs, for which we observe speedups over our own CPU implementation of up to 4.5 when dealing with moderate dimensionality. In high-dimensional settings, in the order of tens to hundreds of dimensions, our sparse grid evaluation kernels on the CPU outperform any other known implementation.

  12. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly

  13. Test of OZI violation in vector meson production with COMPASS

    CERN Document Server

    Bernhard, Johannes

    2011-01-01

    The COMPASS experiment at CERN SPS completed its data taking with hadron beams ($p,\\pi, K$) in the years 2008 and 2009 by collecting a large set of data using different targets (H$_{2}$, Pb, Ni, W). These data are dedicated to hadron spectroscopy, where the focus is directed to the search for exotic bound states of quarks and gluons (hybrids, glueballs). The production of such states is known to be favoured in glue-rich environments, e.g. so-called OZI-forbidden processes. The OZI rule postulates that processes with disconnected quark line diagrams are forbidden. On the one hand, the study of the degree of OZI violation in vector meson production yields the possibilty to learn more about the involved production mechanisms. On the other hand it helps to understand the nucleon's structure itself. Contrary to former experiments, the large data sample allows for detailed studies in respect to Feynman's variable $x_{F}$. We present results from the ongoing analysis on the comparison of $\\omega$ and $\\phi$ vector m...

  14. A hands-on activity for teaching product-process matrix: roadmap and application

    Directory of Open Access Journals (Sweden)

    Luciano Costa Santos

    2014-08-01

    Full Text Available The product-process matrix is a well-known framework proposed by Hayes and Wheelwright (1979) that is commonly used to identify process types and to analyze the alignment of these processes with the products of a company. For didactic purposes, the matrix helps undergraduate beginners in Production Engineering to understand the logic of production systems, providing knowledge that is essential for various course subjects. Considering the high level of abstraction of the concepts underlying the product-process matrix, this paper presents a way to facilitate their learning through the application of a hands-on activity based on the active learning philosophy. The proposed dynamic uses colored plastic sheets and PVC pipes as its main materials, differing from the original proposal of Penlesky and Treleven (2005). In addition to presenting an extremely simple exercise, which encourages its application in the classroom, another contribution of this paper is to define a complete roadmap for conducting the activity. This roadmap describes the assembly of fictitious products in customization and standardization scenarios for the comparison of two process types of the product-process matrix, job shop and assembly line. The activity proved very successful after its application to two groups of Production Engineering undergraduates, confirmed by positive feedback from the students surveyed.

  15. Proceedings of the third "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'16)

    DEFF Research Database (Denmark)

    2016-01-01

    The third edition of the "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) took place in Aalborg, the 4th largest city in Denmark situated beautifully in the northern part of the country, from the 24th to 26th of August 2016. The workshop venue...... learning; Optimization for sparse modelling; Information theory, geometry and randomness; Sparsity? What's next? (Discrete-valued signals; Union of low-dimensional spaces, Cosparsity, mixed/group norm, model-based, low-complexity models, ...); Matrix/manifold sensing/processing (graph, low...

  16. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    Science.gov (United States)

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
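
    The appeal of solvers that need only matrix-vector products is that the forward operator never has to be formed or stored explicitly. The sketch below wraps a stand-in forward model as a SciPy LinearOperator and runs a proximal-gradient (ISTA-type) sparse reconstruction that touches the operator only through matvec and rmatvec; it is not the EIT Jacobian or the GPSR code of the paper, and ISTA is used here merely as a simple stand-in for GPSR.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

rng = np.random.default_rng(0)
m, n = 80, 200
J = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in sensitivity (Jacobian) matrix

# The reconstruction below touches the operator only through matvec / rmatvec,
# so J could be replaced by any implicitly defined forward model of the same shape.
A = LinearOperator((m, n), matvec=lambda x: J @ x, rmatvec=lambda y: J.T @ y)

x_true = np.zeros(n)
x_true[rng.choice(n, size=8, replace=False)] = rng.normal(size=8)
b = A.matvec(x_true) + 0.01 * rng.normal(size=m)

# Step size from the operator norm (in a fully matrix-free setting this would be
# estimated by a few power iterations using only matvec / rmatvec calls).
step = 0.9 / np.linalg.norm(J, 2) ** 2

def ista(A, b, lam=0.02, step=step, iters=500):
    """Proximal-gradient (ISTA) iterations for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.rmatvec(A.matvec(x) - b)      # only matrix-vector products
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

x_hat = ista(A, b)
print("recovered support size:", np.sum(np.abs(x_hat) > 1e-3))
```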

  17. Machinery vibration signal denoising based on learned dictionary and sparse representation

    International Nuclear Information System (INIS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-01-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods. However, those methods are based on fixed basis functions or atoms. In this paper, a novel method is presented. The atoms used to represent signals are learned from the raw signal. And in order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to extract the most relevant columns of the dictionary. At last, the denoised signal is calculated from the sparse vector and the learned dictionary. A simulation signal and a real bearing fault signal are utilized to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms. Its computing efficiency is then demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both denoising performance and computational efficiency. (paper)
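
    The sparse-coding step can be illustrated with a fixed overcomplete cosine dictionary standing in for the dictionary that the paper learns online from the raw signal: a noisy patch is coded with orthogonal matching pursuit and re-synthesized from the few selected atoms. All sizes and the noise level are arbitrary.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Overcomplete cosine dictionary for signal patches of length 64 (a fixed stand-in
# for the online-learned dictionary of the paper).
patch_len, n_atoms = 64, 128
t = np.arange(patch_len)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_atoms)) / n_atoms)
D /= np.linalg.norm(D, axis=0)

# Noisy patch: two harmonic components plus white noise.
clean = 0.8 * D[:, 5] + 0.6 * D[:, 23]
noisy = clean + 0.2 * rng.normal(size=patch_len)

# Sparse-code the patch with OMP and re-synthesize from the few selected atoms.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False).fit(D, noisy)
denoised = D @ omp.coef_

print("MSE noisy:   ", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```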

  18. Comparison of pressure transient response in intensely and sparsely fractured reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Johns, R.T.

    1989-04-01

    A comprehensive analytical model is presented to study the pressure transient behavior of a naturally fractured reservoir with a continuous matrix block size distribution. Geologically realistic probability density functions of matrix block size are used to represent reservoirs of varying fracture intensity and uniformity. Transient interporosity flow is assumed and interporosity skin is incorporated. Drawdown and interference pressure transient tests are investigated. The results show distinctions in the pressure response from intensely and sparsely fractured reservoirs in the absence of interporosity skin. Also, uniformly and nonuniformly fractured reservoirs exhibit distinct responses, irrespective of the degree of fracture intensity. The pressure response in a nonuniformly fractured reservoir with large block size variability approaches a nonfractured (homogeneous) reservoir response. Type curves are developed to estimate matrix block size variability and the degree of fracture intensity from drawdown and interference well tests.

  19. Entanglement property in matrix product spin systems

    International Nuclear Information System (INIS)

    Zhu Jingmin

    2012-01-01

    We study the entanglement properties of matrix product spin-ring systems systematically using the von Neumann entropy. We find that: (i) the Hilbert space dimension of one spin determines the upper limit of the maximal value of the entanglement entropy of one spin, while for the multiparticle entanglement entropy, the upper limit of the maximal value depends on the dimension of the representation matrices. Based on the theory, we can realize the maximum of the entanglement entropy of any spin block by choosing appropriate control parameter values. (ii) When the entanglement entropy of one spin takes its maximal value, the entanglement entropy of an asymptotically large spin block, i.e. the renormalization group fixed point, is not likely to take its maximal value, and so only the entanglement entropy S_n of a spin block that varies with size n can fully characterize the spin-ring entanglement feature. Finally, we give the entanglement dynamics, i.e. the Hamiltonian of the matrix product system. (author)
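
    As a hands-on companion, the sketch below builds the dense state vector of a small spin-ring matrix product state from random tensors and evaluates the von Neumann entropy of a spin block of varying size from its Schmidt spectrum. The tensors are generic random matrices, not the particular parametrized family studied in the paper.

```python
import numpy as np

def mps_ring_state(A, n_sites):
    """Dense state vector of a spin-ring (periodic) MPS.

    A : array of shape (d, D, D) -- one D x D matrix per local spin state.
    The amplitude of a configuration (s1, ..., sn) is Tr(A[s1] ... A[sn]).
    """
    d, D, _ = A.shape
    psi = A.copy()                              # partial products, shape (d^k, D, D)
    for _ in range(n_sites - 1):
        psi = np.einsum('aij,sjk->asik', psi, A).reshape(-1, D, D)
    return np.trace(psi, axis1=1, axis2=2)      # close the ring with a trace

def block_entropy(state, n_sites, d, block):
    """Von Neumann entanglement entropy of the first `block` spins."""
    M = state.reshape(d ** block, d ** (n_sites - block))
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)                 # Schmidt spectrum
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
d, D, n_sites = 2, 3, 8
A = rng.normal(size=(d, D, D))                  # random MPS tensors (toy example)

state = mps_ring_state(A, n_sites)
for block in range(1, n_sites):
    print("block size", block, "entropy", block_entropy(state, n_sites, d, block))
```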

  20. Enhancement of snow cover change detection with sparse representation and dictionary learning

    Science.gov (United States)

    Varade, D.; Dikshit, O.

    2014-11-01

    Sparse representation and decoding are often used for denoising images and compressing images with respect to their inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD trained dictionary and sparse decoding to enhance the change map. The pixels often falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps in the case of Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between the changed and non-changed pixels. The segmented enhanced change map is compared, firstly with the difference of Support Vector Machine (SVM) classified NDSI maps and secondly with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.

  1. Vector boson production in joint resummation

    Energy Technology Data Exchange (ETDEWEB)

    Marzani, Simone; Theeuwes, Vincent [University at Buffalo, The State University of New York,Buffalo, NY 14260-1500 (United States)

    2017-02-24

    We study the transverse momentum (Q_T) distribution of an electro-weak vector boson produced via the Drell-Yan mechanism, in the context of joint resummation. This formalism allows for the simultaneous resummation of logarithmic contributions that are enhanced at small Q_T and at partonic threshold. We extend joint resummation to next-to-next-to-leading logarithmic accuracy and we present resummed and matched results for three different phenomenological setups. In particular, we study the production of a Z boson at the Tevatron and at the Large Hadron Collider (LHC), as well as the production of a heavier Z′ at the LHC. We compare our findings to standard Q_T resummation, as well as to fixed-order perturbation theory. We find that joint resummation provides a moderate (but not flat) correction with respect to Q_T resummation and it leads to a reduction of the scale dependence of the results. However, our study also shows some limitations of this formalism. While the use of joint resummation for Z production at the Tevatron and Z′ production at the LHC appears to be justified, our implementation suffers from a stronger dependence on power corrections for processes which are further away from threshold, such as Z production at the LHC, for which we cannot claim an improvement over standard Q_T resummation.

  2. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel model is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to capture the dominant channel taps without reporting posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012

  3. Measurement of guided mode wave vectors by analysis of the transfer matrix obtained with multi-emitters and multi-receivers in contact

    Energy Technology Data Exchange (ETDEWEB)

    Minonzio, Jean-Gabriel; Talmant, Maryline; Laugier, Pascal, E-mail: jean-gabriel.minonzio@upmc.fr [UPMC Univ Paris 06, UMR 7623, LIP, 15 rue de l' ecole de medecine F-75005, Paris (France)

    2011-01-01

    Different quantitative ultrasound techniques are currently being developed for clinical assessment of human bone status. This paper is dedicated to axial transmission: emitters and receivers are linearly arranged on the same side of the skeletal site, preferentially the forearm. In several clinical studies, the signal velocity of the earliest temporal event has been shown to discriminate osteoporotic patients from healthy subjects. However, a multi-parameter approach might be relevant to improve bone diagnosis, and this could be achieved by accurate measurement of guided mode wave vectors. For clinical purposes and easy access to the measurement site, the probe length is limited to about 10 mm. The limited number of acquisition scan points over such a short distance reduces the efficiency of conventional signal processing techniques, such as the spatio-temporal Fourier transform. The performance of time-frequency techniques was shown to be moderate in other studies. Thus, optimised signal processing is a critical point for a reliable estimate of guided mode wave vectors. Toward this end, a technique taking advantage of both multiple emitters and multiple receivers is proposed. The guided mode wave vectors are obtained using a projection onto the singular vector basis. These vectors are determined by the singular value decomposition of the transmission matrix between the two arrays at different frequencies. This technique enables us to accurately recover guided mode wave vectors for a moderately large array.

  4. Elements of Calculus Quaternionic Matrices And Some Applications In Vector Algebra And Kinematics

    Directory of Open Access Journals (Sweden)

    Pivnyak G.G.

    2016-04-01

    Full Text Available Quaternionic matrices are proposed to develop mathematical models and perform computational experiments. New formulae for complex vector and scalar products in matrix notation, as well as formulae for the first curvature, second curvature, and orientation of the true trihedron tracing, are demonstrated in this paper. An application of quaternionic matrices to the problem of airspace transport system trajectory selection is shown.

  5. Production rates of strange vector mesons at the Z0 resonance

    International Nuclear Information System (INIS)

    Dima, M.O.

    1997-05-01

    This dissertation presents a study of strange vector meson production, the "leading particle" effect, and a first direct measurement of the strangeness suppression parameter in hadronic decays of the neutral electroweak boson, Z0. The measurements were performed in e+e- collisions at the Stanford Linear Accelerator Center (SLAC) with the SLC Large Detector (SLD) experiment. A new-generation particle ID system, the SLD Cerenkov Ring Imaging Detector (CRID), is used to discriminate kaons from pions, enabling the reconstruction of the vector mesons over a wide momentum range. The inclusive production rates of φ and K*0 and the differential rates versus momentum were measured and are compared with those of other experiments and theoretical predictions. The high longitudinal polarisation of the SLC electron beam is used in conjunction with the electroweak quark production asymmetries to separate quark jets from antiquark jets. K*0 production is studied separately in these samples, and the results show evidence for the "leading particle" effect. The difference between K*0 production rates at high momentum in quark and antiquark jets yields a first direct measurement of strangeness suppression in jet fragmentation.

  6. The Non–Symmetric s–Step Lanczos Algorithm: Derivation of Efficient Recurrences and Synchronization–Reducing Variants of BiCG and QMR

    Directory of Open Access Journals (Sweden)

    Feuerriegel Stefan

    2015-12-01

    Full Text Available The Lanczos algorithm is among the most frequently used iterative techniques for computing a few dominant eigenvalues of a large sparse non-symmetric matrix. At the same time, it serves as a building block within biconjugate gradient (BiCG) and quasi-minimal residual (QMR) methods for solving large sparse non-symmetric systems of linear equations. It is well known that, when implemented on distributed-memory computers with a huge number of processes, the synchronization time spent on computing dot products increasingly limits the parallel scalability. Therefore, we propose synchronization-reducing variants of the Lanczos, as well as BiCG and QMR methods, in an attempt to mitigate these negative performance effects. These so-called s-step algorithms are based on grouping dot products for joint execution and replacing time-consuming matrix operations by efficient vector recurrences. The purpose of this paper is to provide a rigorous derivation of the recurrences for the s-step Lanczos algorithm, introduce s-step BiCG and QMR variants, and compare the parallel performance of these new s-step versions with previous algorithms.
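
    As a reference point for what is being reorganized, the sketch below solves a small non-symmetric sparse system with the standard (non-s-step) BiCG from SciPy; the s-step variants discussed in the paper restructure exactly this kind of iteration so that its dot products can be grouped and executed jointly. The test matrix is an arbitrary diagonally dominant random sparse matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicg

rng = np.random.default_rng(0)
n = 1000

# Non-symmetric sparse test matrix, shifted to be diagonally dominant so that
# BiCG converges quickly; each iteration requires one matvec with A and one with A^T.
A = sp.random(n, n, density=0.005, random_state=0, format='csr')
A = A + sp.diags(np.full(n, 5.0))
b = rng.normal(size=n)

x, info = bicg(A, b)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```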

  7. Two alternate proofs of Wang's lune formula for sparse distributed memory and an integral approximation

    Science.gov (United States)

    Jaeckel, Louis A.

    1988-01-01

    In Kanerva's Sparse Distributed Memory, writing to and reading from the memory are done in relation to spheres in an n-dimensional binary vector space. Thus it is important to know how many points are in the intersection of two spheres in this space. Two proofs are given of Wang's formula for spheres of unequal radii, and an integral approximation for the intersection in this case.
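
    For small n the quantity in question can be checked by brute force: enumerate {0,1}^n and count the points lying within Hamming distance r1 of one centre and r2 of the other. The sketch below does exactly this; Wang's closed-form formula gives the same counts without the 2^n enumeration. The centres and radii here are arbitrary.

```python
import itertools
import numpy as np

def sphere_intersection_count(n, x, y, r1, r2):
    """Number of points of {0,1}^n within Hamming distance r1 of x and r2 of y."""
    count = 0
    for bits in itertools.product((0, 1), repeat=n):
        z = np.array(bits)
        if np.sum(z != x) <= r1 and np.sum(z != y) <= r2:
            count += 1
    return count

n = 10
x = np.zeros(n, dtype=int)
y = np.zeros(n, dtype=int)
y[:4] = 1                      # centres at Hamming distance 4

print(sphere_intersection_count(n, x, y, r1=3, r2=5))
```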

  8. Mapping value added positions in facilities management by using a product-process matrix

    DEFF Research Database (Denmark)

    Katchamart, Akarapong

    2013-01-01

    Purpose – The purpose of this exploratory research paper is to present a product-process matrix that assists FM organizations and their stakeholders to map their value added position in their organizations. Using this matrix, FM practitioners are able to assess the existing value added delivering...... of the matrix are an FM product structure and an FM process structure. The supporting empirical data were collected through semi-structured interviews from selected FM organizations supplemented by relevant documents. Findings – Based on a product-process matrix, a typology of FM value added positions...... greater values to the client’s core business. Meanwhile, misaligning dilutes the value delivery. Research limitations/implications – This normative matrix can be used as a decision-making tool for a client to assess its FM performances and activities, and to determine the needs of FM provision...

  9. Triple vector boson production through Higgs-Strahlung with NLO multijet merging

    Energy Technology Data Exchange (ETDEWEB)

    Hoeche, S.; /SLAC; Krauss, F.; /Durham U., IPPP; Pozzorini, S.; /Zurich U.; Schonherr, M.; Thompson, J.M.; /Durham U., IPPP; Zapp, K.C.; /CERN

    2014-07-25

    Triple gauge boson hadroproduction, in particular the production of three W bosons at the LHC, is considered at next-to-leading order accuracy in QCD. The NLO matrix elements are combined with parton showers. Multijet merging is invoked such that NLO matrix elements with one additional jet are also included. The studies here incorporate both the signal and all relevant backgrounds for VH production with the subsequent decay of the Higgs boson into W- or τ-pairs. They have been performed using SHERPA+OPENLOOPS in combination with COLLIER.

  10. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  11. Refractive index inversion based on Mueller matrix method

    Science.gov (United States)

    Fan, Huaxi; Wu, Wenyuan; Huang, Yanhua; Li, Zhaozhao

    2016-03-01

    Based on the Stokes vector and the Jones vector, the correlation between Mueller matrix elements and the refractive index was studied and the result simplified; through the Mueller matrix approach, an expression for refractive index inversion was deduced. The Mueller matrix elements, under different incident angles, are simulated through the expression for specular reflection so as to analyze the influence of the angle of incidence and the refractive index on them, which is verified through measurement of the Mueller matrix elements of a polished metal surface. Research shows that, under the condition of specular reflection, the result of Mueller matrix inversion is consistent with experiment and can be used as a refractive index inversion method, providing a new way for target detection and recognition technology.

  12. Use of Lanczos vectors in fluid/structure interaction problems

    International Nuclear Information System (INIS)

    Jeans, R.; Mathews, I.C.

    1992-01-01

    The goals of any numerical computational technique used for the solution of structural acoustics problems in the exterior infinite domain should be accuracy with rapid convergence, robustness, and computational efficiency. A computer program has been developed to achieve each of these three goals. Accuracy and robustness in the numerical representation of the integral equations used to represent the infinite fluid were attained through the use of boundary element implementations of the surface Helmholtz integral equations. Computational efficiency was achieved through the use of Lanczos vectors to model the deformation characteristics of the structure. The authors have developed collocation and variational techniques to overcome the difficulties previously encountered in the numerical implementation of the hypersingular integral operator. The Cauchy singularity present in the integral formulation is made numerically amenable through the use of tangential derivatives in both the collocation and variational techniques. The variational approach has the advantage that the resulting added fluid mass term is symmetric and combines efficiently with a finite element approximation of the structural elastic response. Several different strategies making use of the Lanczos vectors have been investigated. The first involved the use of Lanczos vectors solely to characterize the structural response. This reduced form of the structural dynamical matrix was then substituted back into a Burton and Miller formulation of the acoustic problem. The second strategy investigated involved forming the complex Lanczos vectors of the dynamical matrix formed from the addition of a symmetric added fluid matrix to the structural mass matrix. The size of the resultant matrix equation set solved at each frequency for this strategy is determined by the number of Lanczos vectors used. 19 refs., 10 figs., 2 tabs

  13. One-point functions in AdS/dCFT from matrix product states

    International Nuclear Information System (INIS)

    Buhl-Mortensen, Isak; Leeuw, Marius de; Kristjansen, Charlotte; Zarembo, Konstantin

    2016-01-01

    One-point functions of certain non-protected scalar operators in the defect CFT dual to the D3-D5 probe brane system with k units of world volume flux can be expressed as overlaps between Bethe eigenstates of the Heisenberg spin chain and a matrix product state. We present a closed expression of determinant form for these one-point functions, valid for any value of k. The determinant formula factorizes into the k=2 result times a k-dependent pre-factor. Making use of the transfer matrix of the Heisenberg spin chain we recursively relate the matrix product state for higher even and odd k to the matrix product state for k=2 and k=3 respectively. We furthermore find evidence that the matrix product states for k=2 and k=3 are related via a ratio of Baxter’s Q-operators. The general k formula has an interesting thermodynamical limit involving a non-trivial scaling of k, which indicates that the match between string and field theory one-point functions found for chiral primaries might be tested for non-protected operators as well. We revisit the string computation for chiral primaries and discuss how it can be extended to non-protected operators.

  14. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Full Text Available Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  15. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
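
    A hedged sketch of the core idea in the two records above: sparse coding against a fixed, pre-built dictionary instead of learning one online. The ridge dictionary of the paper is replaced here by a generic overcomplete DCT dictionary, and a hand-rolled orthogonal matching pursuit stands in for the authors' solver.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: approximate y by a k-sparse combination of the columns of D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Fixed overcomplete DCT dictionary (a stand-in for the pre-learned ridge atoms).
n, n_atoms, k = 64, 128, 5
t = np.arange(n)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_atoms)) / n_atoms)
D /= np.linalg.norm(D, axis=0)

# Denoise a synthetic 1-D range profile: code the noisy signal, keep k atoms.
clean = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 8)
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(n)
denoised = D @ omp(D, noisy, k)
```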

  16. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.

  17. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig

    2015-09-18

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.

  18. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.; Ahmed, Syed

    2015-01-01

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.
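
    For orientation, a minimal baseline of the estimator being accelerated: the textbook linear MMSE channel estimate. The measurement matrix A, the prior covariance R_h and the noise level below are illustrative stand-ins; the paper's contribution, exploiting the Toeplitz structure and the cluster partitioning to avoid the full matrix inversion, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma2 = 32, 48, 0.01                      # taps, observations, noise power
h = np.zeros(n)
h[[2, 3, 20, 21]] = rng.standard_normal(4)       # cluster-sparse channel taps
A = rng.choice([-1.0, 1.0], (m, n))              # stand-in measurement matrix
y = A @ h + np.sqrt(sigma2) * rng.standard_normal(m)

R_h = np.eye(n)                                  # assumed prior channel covariance
# Linear MMSE estimate: R_h A^T (A R_h A^T + sigma^2 I)^{-1} y
h_mmse = R_h @ A.T @ np.linalg.solve(A @ R_h @ A.T + sigma2 * np.eye(m), y)
```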

  19. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ the support vector machines (SVMs) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing the sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515

  20. Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism

    OpenAIRE

    Arias-Castro, Ery; Candès, Emmanuel J.; Plan, Yaniv

    2011-01-01

    Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose we have $p$ covariates and that under the alternative, the response only depends upon the order of $p^{1-\alpha}$ of those, $0\le\alpha\le1$...
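
    A small sketch of the higher-criticism statistic referred to in the title, in its basic Donoho-Jin form (the paper's precise variant and thresholds may differ):

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Basic HC statistic over the smallest alpha0-fraction of sorted p-values."""
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: int(alpha0 * n)].max()

rng = np.random.default_rng(0)
pvals = rng.uniform(size=1000)
pvals[:10] = rng.uniform(size=10) * 1e-4         # a handful of sparse signals
print(higher_criticism(pvals))                   # a large value flags the sparse effect
```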

  1. Vector meson production in the dimuon channel in the ALICE experiment at the LHC

    CERN Document Server

    Massacrier, L.

    2011-01-01

    The purpose of the ALICE experiment at the LHC is the study of the Quark Gluon Plasma (QGP) formed in ultra-relativistic heavy-ion collisions, a state of matter in which quarks and gluons are deconfined. The properties of this state of strongly-interacting matter can be accessed through the study of light vector mesons ($\rho$, $\omega$ and $\phi$). Indeed, the strange quark content ($s\bar{s}$) of the $\phi$ meson makes its study interesting in connection with the strangeness enhancement observed in heavy-ion collisions. Moreover, $\rho$ and $\omega$ spectral function studies give information on chiral symmetry restoration. Vector meson production in pp collisions is important as a baseline for heavy-ion studies and for constraining hadronic models. We present results on light vector meson production obtained with the muon spectrometer of the ALICE experiment in pp collisions at $\sqrt{s}$=7 TeV. Production ratios, integrated and differential cross sections for $\phi$ and $\omega$ are presented. Those result...

  2. Sparse modeling applied to patient identification for safety in medical physics applications

    Science.gov (United States)

    Lewkowitz, Stephanie

    Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet still is a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different tailored subroutines for fingerprints. In this research, the same exact model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse modeling is a powerful tool that has already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, due to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library, and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a truly neurally inspired artificial neural network, solves the computationally difficult task of finding the sparse code for the test image. The components of the sparse representation vector are summed by ℓ1 pooling

  3. Fast Solution in Sparse LDA for Binary Classification

    Science.gov (United States)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of its combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add/delete to/from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes as compared to days or weeks as taken by the prior art. Sparse LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic
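
    A hedged sketch of greedy feature selection for a two-class Fisher discriminant, scoring each candidate subset by the binary criterion d'S_w^{-1}d restricted to that subset. This only illustrates the flavour of such an algorithm; it does not reproduce the article's analytic eigenvalue formula.

```python
import numpy as np

def greedy_sparse_lda(X0, X1, k):
    """Pick k features greedily to maximize the two-class Fisher criterion."""
    d_full = X1.mean(0) - X0.mean(0)
    Sw_full = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(X0.shape[1]):
            if j in selected:
                continue
            idx = selected + [j]
            d = d_full[idx]
            Sw = Sw_full[np.ix_(idx, idx)]
            score = d @ np.linalg.solve(Sw + 1e-8 * np.eye(len(idx)), d)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 20))
X1 = rng.standard_normal((50, 20))
X1[:, [3, 7]] += 2.0                       # make features 3 and 7 informative
print(greedy_sparse_lda(X0, X1, k=2))      # typically selects features 3 and 7
```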

  4. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    Science.gov (United States)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling large-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method and the result proved to be a satisfactory one.
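
    For reference, a minimal standard LS-SVM in its function-estimation form with an RBF kernel (the paper's two-layer construction with KPCA and the linear-programming variant is not shown); the data, kernel width and regularization below are illustrative assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM linear system; returns the dual weights alpha and bias b."""
    n = len(y)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma**2))                  # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    sq = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.sign(np.exp(-sq / (2 * sigma**2)) @ alpha + b)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
alpha, b = lssvm_train(X, y, gamma=10.0, sigma=1.0)
print(np.mean(lssvm_predict(X, alpha, b, X) == y))    # training accuracy
```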

  5. Matrix product states for lattice field theories

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, H. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Tsukuba Univ., Ibaraki (Japan). Graduate School of Pure and Applied Sciences

    2013-10-15

    The term Tensor Network States (TNS) refers to a number of families of states that represent different ansaetze for the efficient description of the state of a quantum many-body system. Matrix Product States (MPS) are one particular case of TNS, and have become the most precise tool for the numerical study of one dimensional quantum many-body systems, as the basis of the Density Matrix Renormalization Group method. Lattice Gauge Theories (LGT), in their Hamiltonian version, offer a challenging scenario for these techniques. While the dimensions and sizes of the systems amenable to TNS studies are still far from those achievable by 4-dimensional LGT tools, Tensor Networks can be readily used for problems which more standard techniques, such as Markov chain Monte Carlo simulations, cannot easily tackle. Examples of such problems are the presence of a chemical potential or out-of-equilibrium dynamics. We have explored the performance of Matrix Product States in the case of the Schwinger model, as a widely used testbench for lattice techniques. Using finite-size, open boundary MPS, we are able to determine the low energy states of the model in a fully non-perturbative manner. The precision achieved by the method allows for accurate finite size and continuum limit extrapolations of the ground state energy, but also of the chiral condensate and the mass gaps, thus showing the feasibility of these techniques for gauge theory problems.
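
    A compact illustration of the matrix product state ansatz itself: any state vector can be rewritten exactly as an MPS by successive SVDs (truncating the singular values is what makes the representation efficient in practice). This is generic textbook material, not the authors' Schwinger-model code.

```python
import numpy as np

def state_to_mps(psi, n, d=2):
    """Exact MPS factors A[i] of shape (chi_left, d, chi_right) via successive SVDs."""
    tensors, chi = [], 1
    rem = psi.reshape(1, -1)
    for _ in range(n - 1):
        rem = rem.reshape(chi * d, -1)
        U, S, Vh = np.linalg.svd(rem, full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))
        chi = U.shape[1]
        rem = S[:, None] * Vh
    tensors.append(rem.reshape(chi, d, 1))
    return tensors

# Round-trip check on a random 6-site, spin-1/2 state.
n = 6
rng = np.random.default_rng(0)
psi = rng.standard_normal(2**n)
psi /= np.linalg.norm(psi)
mps = state_to_mps(psi, n)

vec = mps[0].reshape(2, -1)                 # contract the chain back together
for A in mps[1:]:
    vec = np.tensordot(vec, A, axes=([-1], [0]))
assert np.allclose(vec.reshape(-1), psi)    # exact because nothing was truncated
```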

  6. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.

  7. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively, of normal vector are encoded locally to their corresponding normal pattern histograms. They are finally fed to a sparse representation classifier enhanced by learning based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than original normal information. Moreover, the patch based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.

  8. Kernel polynomial method for a nonorthogonal electronic-structure calculation of amorphous diamond

    International Nuclear Information System (INIS)

    Roeder, H.; Silver, R.N.; Drabold, D.A.; Dong, J.J.

    1997-01-01

    The Kernel polynomial method (KPM) has been successfully applied to tight-binding electronic-structure calculations as an O(N) method. Here we extend this method to nonorthogonal basis sets with a sparse overlap matrix S and a sparse Hamiltonian H. Since the KPM utilizes matrix-vector multiplications it is necessary to apply S⁻¹H onto a vector. The multiplication by S⁻¹ is performed using a preconditioned conjugate-gradient method and does not involve the explicit inversion of S. Hence the method scales the same way as the original KPM, i.e., O(N), although there is an overhead due to the additional conjugate-gradient part. We apply this method to a large scale electronic-structure calculation of amorphous diamond. copyright 1997 The American Physical Society
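
    A minimal sketch of the key operation described above: applying S⁻¹H to a vector by solving S x = H v with conjugate gradients instead of forming S⁻¹. The toy matrices and the absence of a preconditioner are simplifications.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def apply_SinvH(S, H, v):
    """Return S^{-1} H v by solving S x = H v with conjugate gradients."""
    x, info = cg(S, H @ v)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return x

# Toy sparse SPD overlap matrix and symmetric Hamiltonian.
n = 500
S = sp.eye(n) + 0.1 * sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1])
H = sp.diags([np.full(n, -2.0), np.ones(n - 1), np.ones(n - 1)], [0, -1, 1])
v = np.random.default_rng(0).standard_normal(n)
w = apply_SinvH(S.tocsr(), H.tocsr(), v)
```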

  9. Heading-vector navigation based on head-direction cells and path integration.

    Science.gov (United States)

    Kubie, John L; Fenton, André A

    2009-05-01

    Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and perform vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal
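
    A small sketch of the shortcut-matrix idea under simplifying assumptions: waypoints are known 2-D positions, the stored sequential vectors come from path integration, and shortcut vectors between all waypoint pairs follow by vector arithmetic.

```python
import numpy as np

# Waypoints visited during exploration (illustrative coordinates).
waypoints = np.array([[0.0, 0.0], [2.0, 1.0], [3.0, 4.0], [-1.0, 5.0]])
seq_vectors = np.diff(waypoints, axis=0)          # vectors stored while exploring

n = len(waypoints)
shortcut = np.zeros((n, n, 2))                    # the "shortcut matrix"
for i in range(n):
    for j in range(n):
        # Vector arithmetic: sum the sequential displacement vectors from i to j.
        if i < j:
            shortcut[i, j] = seq_vectors[i:j].sum(axis=0)
        elif i > j:
            shortcut[i, j] = -seq_vectors[j:i].sum(axis=0)

# Heading angle (degrees, from the x-axis) to steer from waypoint 0 to waypoint 2,
# which head-direction-tuned cells would then track during route execution.
heading = np.degrees(np.arctan2(*shortcut[0, 2][::-1]))
print(heading)
```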

  10. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, the image reconstruction of ET is a nonlinear and ill-posed inverse problem and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides a high quality of reconstructed images. In this paper, a sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2−L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness for the image reconstruction. (paper)
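
    For orientation, a minimal solver for the L2−L1 problem mentioned above, using plain iterative soft thresholding (ISTA); SpaRSA and the Krylov-subspace projection of the paper accelerate this basic iteration and are not reproduced here.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))             # stand-in sensitivity matrix
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]        # sparse conductivity change
b = A @ x_true
x_rec = ista(A, b, lam=0.05)
```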

  11. Metabolic flux profiling of MDCK cells during growth and canine adenovirus vector production.

    Science.gov (United States)

    Carinhas, Nuno; Pais, Daniel A M; Koshkin, Alexey; Fernandes, Paulo; Coroadinha, Ana S; Carrondo, Manuel J T; Alves, Paula M; Teixeira, Ana P

    2016-03-23

    Canine adenovirus vector type 2 (CAV2) represents an alternative to human adenovirus vectors for certain gene therapy applications, particularly neurodegenerative diseases. However, more efficient production processes, assisted by a greater understanding of the effect of infection on producer cells, are required. Combining [1,2-¹³C]glucose and [U-¹³C]glutamine, we apply for the first time ¹³C-metabolic flux analysis (¹³C-MFA) to study E1-transformed Madin-Darby Canine Kidney (MDCK) cell metabolism during growth and CAV2 production. MDCK cells displayed a marked glycolytic and ammoniagenic metabolism, and ¹³C data revealed a large fraction of glutamine-derived labelling in TCA cycle intermediates, emphasizing the role of glutamine anaplerosis. ¹³C-MFA demonstrated the importance of pyruvate cycling in balancing glycolytic and TCA cycle activities, as well as the occurrence of reductive alpha-ketoglutarate (AKG) carboxylation. In turn, CAV2 infection significantly upregulated fluxes through most of central metabolism, including glycolysis, the pentose-phosphate pathway, glutamine anaplerosis and, more prominently, reductive AKG carboxylation and cytosolic acetyl-coenzyme A formation, suggestive of increased lipogenesis. Based on these results, we suggest culture supplementation strategies to stimulate nucleic acid and lipid biosynthesis for improved canine adenoviral vector production.

  12. Crossed products by endomorphisms, vector bundles and group duality, II

    OpenAIRE

    Vasselli, Ezio

    2004-01-01

    We study C*-algebra endomorphisms which are special in a weaker sense w.r.t. the notion introduced by Doplicher and Roberts. We assign to such endomorphisms a geometrical invariant, representing a cohomological obstruction for them to be special in the usual sense. Moreover, we construct the crossed product of a C*-algebra by the action of the dual of a (nonabelian, noncompact) group of vector bundle automorphisms. These crossed products supply a class of examples for such generalized special ...

  13. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.; Grimm, Volker; Hochbruck, Marlis

    2013-01-01

    A well-known problem in computing some matrix functions iteratively is the lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Suppose the matrix exponential of a given matrix times a given vector has to be computed.

  14. Residual, restarting and Richardson iteration for the matrix exponential

    NARCIS (Netherlands)

    Bochev, Mikhail A.

    2010-01-01

    A well-known problem in computing some matrix functions iteratively is a lack of a clear, commonly accepted residual notion. An important matrix function for which this is the case is the matrix exponential. Assume, the matrix exponential of a given matrix times a given vector has to be computed. We

  15. Completing sparse and disconnected protein-protein network by deep learning.

    Science.gov (United States)

    Huang, Lei; Liao, Li; Wu, Cathy H

    2018-03-22

    Protein-protein interaction (PPI) prediction remains a central task in systems biology to achieve a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network level prediction. Many of the existing network level methods predict PPIs under the assumption that the training network should be connected. However, this assumption greatly affects the prediction power and limits the application area because the current gold-standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on a deep learning neural network and the regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction and 0 otherwise) from the adjacency matrix of a sparse disconnected training PPI network. Unlike an autoencoder, neurons at the input layer are given all-zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic the ancient interactome at different times during evolution. After the training step, an evolved PPI network whose rows are outputs of the neural network can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix that is built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data measured as AUC are increased by up to 8.4 and 14.9% respectively, as compared to the baseline. Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network
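
    A hedged sketch of the regularized Laplacian kernel step on a toy adjacency matrix (the deep-learning "evolution" of the network that precedes it in the paper is not modelled; alpha is an assumed diffusion parameter):

```python
import numpy as np

def regularized_laplacian_kernel(A, alpha=0.1):
    """K = (I + alpha * L)^{-1} with L = D - A, the graph Laplacian."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    return np.linalg.inv(np.eye(len(A)) + alpha * L)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)   # sparse, disconnected toy network
K = regularized_laplacian_kernel(A)
# Rank candidate non-edges (excluding self-pairs) by their kernel value.
scores = K * (1 - np.eye(len(A))) * (1 - A)
```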

  16. On the Vectorization of FIR Filterbanks

    Directory of Open Access Journals (Sweden)

    Barbedo Jayme Garcia Arnal

    2007-01-01

    Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly explored. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.

  17. On the Vectorization of FIR Filterbanks

    Directory of Open Access Journals (Sweden)

    Amauri Lopes

    2007-01-01

    Full Text Available This paper presents a vectorization technique to implement FIR filterbanks. The word vectorization, in the context of this work, refers to a strategy in which all iterative operations are replaced by equivalent vector and matrix operations. This approach allows the increasing parallelism of the most recent computer processors and systems to be properly explored. The vectorization techniques are applied to two kinds of FIR filterbanks (conventional and recursive), and are presented in such a way that they can be easily extended to any kind of FIR filterbank. The vectorization approach is compared to other kinds of implementation that do not explore the parallelism, and also to a previous FIR filter vectorization approach. The tests were performed in Matlab and C, in order to explore different aspects of the proposed technique.
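
    A small illustration of the vectorization idea in the two records above: the per-sample filtering loop of a filterbank is replaced by a single matrix product built from Toeplitz convolution matrices. This is a generic reformulation, not the authors' Matlab/C implementation.

```python
import numpy as np
from scipy.linalg import toeplitz

def fir_matrix(h, n):
    """(n + len(h) - 1) x n full-convolution matrix for FIR taps h."""
    col = np.concatenate([h, np.zeros(n - 1)])
    row = np.concatenate([[h[0]], np.zeros(n - 1)])
    return toeplitz(col, row)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
bank = [rng.standard_normal(8) for _ in range(4)]          # 4-channel filterbank
H = np.vstack([fir_matrix(h, len(x)) for h in bank])       # stack channel matrices
y = (H @ x).reshape(len(bank), -1)                         # all channels at once

# Check against the iterative (per-filter) implementation.
ref = np.vstack([np.convolve(x, h) for h in bank])
assert np.allclose(y, ref)
```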

  18. Simplified production and concentration of HIV-1-based lentiviral vectors using HYPERFlask vessels and anion exchange membrane chromatography

    Science.gov (United States)

    Kutner, Robert H; Puthli, Sharon; Marino, Michael P; Reiser, Jakob

    2009-01-01

    Background During the past twelve years, lentiviral (LV) vectors have emerged as valuable tools for transgene delivery because of their ability to transduce nondividing cells and their capacity to sustain long-term transgene expression in target cells in vitro and in vivo. However, despite significant progress, the production and concentration of high-titer, high-quality LV vector stocks is still cumbersome and costly. Methods Here we present a simplified protocol for LV vector production on a laboratory scale using HYPERFlask vessels. HYPERFlask vessels are high-yield, high-performance flasks that utilize a multilayered gas permeable growth surface for efficient gas exchange, allowing convenient production of high-titer LV vectors. For subsequent concentration of LV vector stocks produced in this way, we describe a facile protocol involving Mustang Q anion exchange membrane chromatography. Results Our results show that unconcentrated LV vector stocks with titers in excess of 10⁸ transduction units (TU) per ml were obtained using HYPERFlasks and that these titers were higher than those produced in parallel using regular 150-cm² tissue culture dishes. We also show that up to 500 ml of an unconcentrated LV vector stock prepared using a HYPERFlask vessel could be concentrated using a single Mustang Q Acrodisc with a membrane volume of 0.18 ml. Up to 5.3 × 10¹⁰ TU were recovered from a single HYPERFlask vessel. Conclusion The protocol described here is easy to implement and should facilitate high-titer LV vector production for preclinical studies in animal models without the need for multiple tissue culture dishes and ultracentrifugation-based concentration protocols. PMID:19220915

  19. Facial expression recognition based on weber local descriptor and sparse representation

    Science.gov (United States)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in the area of computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed which achieve very high accuracy on face images without any interference. Nowadays, many researchers have begun to challenge the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: firstly the face images are divided into many local patches, then the WLD histograms of each patch are extracted, and finally all the WLD histogram features are composed into a vector and combined with SRC to classify the facial expressions. The experiment results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.

  20. Applications of the conserved vector current theory and the partially conserved axial-vector current theory to nuclear beta-decays

    International Nuclear Information System (INIS)

    Tint, M.

    The contribution of the mesonic exchange effect to the conserved vector current in the first forbidden β-decay of RaE is estimated under the headings: (1) The conserved vector current. (2) The CVC theory and the first forbidden β-decays. (3) Shell model calculations of some matrix elements. (4) Direct calculation of the exchange term. Considering the mesonic exchange effect in the axial-vector current of β-decay, the partially conserved axial-vector current theory and experimental results of the process p + p → d + π⁺ are examined. (U.K.)

  1. Optimal Sparse Matrix Dense Vector Multiplication in the I/O-Model

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2010-01-01

    of nonzero entries is kN, i.e., where the average number of nonzero entries per column is k. We investigate what is the external worst-case complexity, i.e., the best possible upper bound on the number of I/Os, as a function of k and N. We determine this complexity up to a constant factor for all meaningful...
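
    For concreteness, the kernel whose I/O complexity is being analysed, written out as a textbook compressed-sparse-row (CSR) matrix-vector product (k is then the average number of nonzeros per column):

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """y = A @ x for A stored in compressed sparse row (CSR) form."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y

# 3x3 example:  [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(spmv_csr(indptr, indices, data, np.array([1.0, 1.0, 1.0])))  # [3. 3. 9.]
```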

  2. Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling

    DEFF Research Database (Denmark)

    Habib, Tufail

    2014-01-01

    Design structure matrix (DSM) modeling in complex system design supports to define physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural...... interactions across subsystems and components. For this purpose, Cambridge advanced modeler (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering, partitioning as well as structure analysis of the system. The DSM analysis is helpful...... understanding of the system. Since product architecture has broad implications in relation to product life cycle issues, in this paper, mechatronic product is decomposed into subsystems and components, and then, DSM model is developed to examine the extent of modularity in the system and to manage multiple...

  3. Parton-shower matching systematics in vector-boson-fusion WW production

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, Michael [Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Plaetzer, Simon [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-05-15

    We perform a detailed analysis of next-to-leading order plus parton-shower matching in vector-boson-fusion WW production including leptonic decays. The study is performed in the Herwig 7 framework interfaced to VBFNLO 3, using the angular-ordered and dipole-based parton-shower algorithms combined with the subtractive and multiplicative-matching algorithms. (orig.)

  4. Current status of Plasmodium knowlesi vectors: a public health concern?

    Science.gov (United States)

    Vythilingam, I; Wong, M L; Wan-Yussof, W S

    2018-01-01

    Plasmodium knowlesi, a simian malaria parasite, is currently affecting humans in Southeast Asia. Malaysia has reported the most cases and P. knowlesi is the predominant species occurring in humans. The vectors of P. knowlesi belong to the Leucosphyrus group of Anopheles mosquitoes. These are generally described as forest-dwelling mosquitoes. With deforestation and changes in land use, some species have become predominant in farms and villages. However, knowledge of the distribution of these vectors in the country is sparse. From a public health point of view it is important to know the vectors, so that risk factors for knowlesi malaria can be identified and control measures instituted where possible. Here, we review what is known about the knowlesi malaria vectors and ascertain the gaps in knowledge, so that future studies can concentrate on this paucity of data in order to address this zoonotic problem.

  5. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin

    2014-01-01

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse

  6. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  7. Symbolic computer vector analysis

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    A MACSYMA program is described which performs symbolic vector algebra and vector calculus. The program can combine and simplify symbolic expressions including dot products and cross products, together with the gradient, divergence, curl, and Laplacian operators. The distribution of these operators over sums or products is under user control, as are various other expansions, including expansion into components in any specific orthogonal coordinate system. There is also a capability for deriving the scalar or vector potential of a vector field. Examples include derivation of the partial differential equations describing fluid flow and magnetohydrodynamics, for 12 different classic orthogonal curvilinear coordinate systems.
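
    The same kind of symbolic vector calculus can be reproduced today with SymPy's vector module; a brief sketch (not related to the original MACSYMA code):

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
phi = N.x**2 * N.y + N.z                          # scalar field
F = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k       # vector field

print(gradient(phi))          # 2*x*y i + x**2 j + 1 k, in N components
print(divergence(F))          # x + y + z
print(curl(F))                # -y i - z j - x k
print(divergence(curl(F)))    # 0, as the identity div(curl F) = 0 requires
```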

  8. On the inclusive reaction e+e- → VX with regard for polarization states of generated vector meson

    International Nuclear Information System (INIS)

    Khachtryan, G.N.; Shakhnazaryan, Yu.G.

    1977-01-01

    The e⁺e⁻ → VX inclusive process has been considered with allowance made for the polarization states of the vector meson. The tensor that describes the vertex of the γ → VX transition has also been considered. In the general case the tensor contains eight structure functions. The elements of the vector meson density matrix have been calculated in the helicity representation. These elements are expressed in terms of the given structure functions and the polarization vectors of the annihilating particles. It is shown that the structure functions can be determined from the study of the angular distribution of the products of the vector meson decay into pseudoscalar particles (ρ→2π, ω→3π, φ→2K) and into a lepton-antilepton pair (ψ, ψ′→e⁺e⁻)

  9. Turbulent flows over sparse canopies

    Science.gov (United States)

    Sharma, Akshath; García-Mayoral, Ricardo

    2018-04-01

    Turbulent flows over sparse and dense canopies exerting a similar drag force on the flow are investigated using Direct Numerical Simulations. The dense canopies are modelled using a homogeneous drag force, while for the sparse canopy, the geometry of the canopy elements is represented. It is found that on using the friction velocity based on the local shear at each height, the streamwise velocity fluctuations and the Reynolds stress within the sparse canopy are similar to those from a comparable smooth-wall case. In addition, when scaled with the local friction velocity, the intensity of the off-wall peak in the streamwise vorticity for sparse canopies also recovers a value similar to a smooth-wall. This indicates that the sparse canopy does not significantly disturb the near-wall turbulence cycle, but causes its rescaling to an intensity consistent with a lower friction velocity within the canopy. In comparison, the dense canopy is found to have a higher damping effect on the turbulent fluctuations. For the case of the sparse canopy, a peak in the spectral energy density of the wall-normal velocity, and Reynolds stress is observed, which may indicate the formation of Kelvin-Helmholtz-like instabilities. It is also found that a sparse canopy is better modelled by a homogeneous drag applied on the mean flow alone, and not the turbulent fluctuations.

  10. Electroproduction and photoproduction of vector mesons and generalized vector meson dominance

    International Nuclear Information System (INIS)

    Fraas, H.; Kuroda, M.

    1977-05-01

    Using generalized vector meson dominance, electro- and photoproduction of vector mesons is studied. The unnatural parity exchange part of ω(1.2) production is estimated to be about one fourth of that of ω-production. The off-diagonal transition model suggests the suppression of diffractive ρ(1.2) and ω(1.2) production. (orig.) [de]

  11. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
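
    A hedged sketch of the system being simulated: the gradient-based network dX/dt = -γ Aᵀ(AX − I) converges to A⁻¹, and SciPy's RK45 integrator plays the role of MATLAB's ode45 here. The matrix, gain and time horizon below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 1.0], [2.0, 3.0]])
gamma = 10.0                                   # design gain of the network
n = A.shape[0]

def rhs(t, x_flat):
    """Matrix ODE dX/dt = -gamma * A^T (A X - I), flattened to a vector ODE."""
    X = x_flat.reshape(n, n)
    dX = -gamma * A.T @ (A @ X - np.eye(n))
    return dX.ravel()

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(n * n), rtol=1e-8, atol=1e-10)
X_final = sol.y[:, -1].reshape(n, n)
print(np.allclose(X_final, np.linalg.inv(A), atol=1e-4))   # state settles at A^{-1}
```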

  12. Matrix algebra theory, computations and applications in statistics

    CERN Document Server

    Gentle, James E

    2017-01-01

    This textbook for graduate and advanced undergraduate students presents the theory of matrix algebra for statistical applications, explores various types of matrices encountered in statistics, and covers numerical linear algebra. Matrix algebra is one of the most important areas of mathematics in data science and in statistical theory, and the second edition of this very popular textbook provides essential updates and comprehensive coverage on critical topics in mathematics in data science and in statistical theory. Part I offers a self-contained description of relevant aspects of the theory of matrix algebra for applications in statistics. It begins with fundamental concepts of vectors and vector spaces; covers basic algebraic properties of matrices and analytic properties of vectors and matrices in multivariate calculus; and concludes with a discussion on operations on matrices in solutions of linear systems and in eigenanalysis. Part II considers various types of matrices encountered in statistics, such as...

  13. Sparse random matrices: The eigenvalue spectrum revisited

    International Nuclear Information System (INIS)

    Semerjian, Guilhem; Cugliandolo, Leticia F.

    2003-08-01

    We revisit the derivation of the density of states of sparse random matrices. We derive a recursion relation that allows one to compute the spectrum of the matrix of incidence for finite trees that determines completely the low concentration limit. Using the iterative scheme introduced by Biroli and Monasson [J. Phys. A 32, L255 (1999)] we find an approximate expression for the density of states expected to hold exactly in the opposite limit of large but finite concentration. The combination of the two methods yields a very simple geometric interpretation of the tails of the spectrum. We test the analytic results with numerical simulations and we suggest an indirect numerical method to explore the tails of the spectrum. (author)

  14. Compressive sensing using optimized sensing matrix for face verification

    Science.gov (United States)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics appears as one of the solutions capable of solving problems that occur with password-based data access; for example, a password may be forgotten, and recalling many different passwords is hard. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether the user has the authority to access the data or not. Facial biometrics is chosen for its low-cost implementation and its reasonably accurate results for user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimension size as well as encrypt the data in the form of a facial test image, where the image is represented by sparse signals. The encrypted data can be reconstructed using a Sparse Coding algorithm. Two types of Sparse Coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 seconds and 96.33% for OMP with a response time of 0.4046 seconds using a non-optimized sensing matrix, and 99% for IRLS with a response time of 13.4791 seconds and 98.33% for OMP with a response time of 3.1571 seconds using an optimized sensing matrix.
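
    A minimal sketch of IRLS-ℓp recovery of a sparse code x from measurements y = Ax, the reconstruction step compared against OMP above; the dimensions, the ε schedule and the choice of p are illustrative, not the paper's settings.

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=30, eps=1e-1):
    """IRLS for min ||x||_p subject to A x = y (noiseless sketch)."""
    x = A.T @ np.linalg.solve(A @ A.T, y)          # least-norm starting point
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)            # reweighting
        Q = np.diag(1.0 / w)
        x = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, y)
        eps = max(eps * 0.5, 1e-8)                 # gently shrink the smoothing
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = irls_lp(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))              # typically small in this regime
```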

  15. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  16. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  17. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  18. Tensor-GMRES method for large sparse systems of nonlinear equations

    Science.gov (United States)

    Feng, Dan; Pulliam, Thomas H.

    1994-01-01

    This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
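
    For orientation, the Newton-GMRES backbone that the tensor method extends, sketched with a matrix-free finite-difference Jacobian on a toy system (the quadratic tensor term of the paper is not included):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(x):                                   # toy nonlinear system F(x) = 0
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def newton_gmres(F, x, n_newton=20, tol=1e-10):
    for _ in range(n_newton):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        eps = 1e-7
        Jv = lambda v: (F(x + eps * v) - fx) / eps      # matrix-free J*v product
        J = LinearOperator((len(x), len(x)), matvec=Jv)
        dx, _ = gmres(J, -fx)                           # inexact Newton step
        x = x + dx
    return x

print(newton_gmres(F, np.array([1.0, 1.0])))            # converges to [1, 2]
```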

  19. Sparse distributed memory overview

    Science.gov (United States)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
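
    A hedged toy sketch of a Kanerva-style sparse distributed memory: a pattern is written into hard locations within a Hamming radius of its address and read back from a noisy cue; all sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_locations, radius = 256, 2000, 112

hard_addresses = rng.integers(0, 2, (n_locations, n_bits))
counters = np.zeros((n_locations, n_bits), dtype=int)

def activate(addr):
    """Hard locations within the Hamming radius of the query address."""
    return np.count_nonzero(hard_addresses != addr, axis=1) <= radius

def write(addr, data):
    counters[activate(addr)] += 2 * data - 1        # store bits as +/-1 increments

def read(addr):
    return (counters[activate(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                             # autoassociative storage
noisy = pattern.copy()
noisy[:20] ^= 1                                     # flip 20 bits as a partial cue
recovered = read(noisy)
print(np.count_nonzero(recovered != pattern))       # usually 0 with one stored item
```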

  20. Delivery of viral vectors to tumor cells: extracellular transport, systemic distribution, and strategies for improvement.

    Science.gov (United States)

    Wang, Yong; Yuan, Fan

    2006-01-01

    It is a challenge to deliver therapeutic genes to tumor cells using viral vectors because (i) the size of these vectors is close to or larger than the space between fibers in the extracellular matrix and (ii) viral proteins are potentially toxic in normal tissues. In general, gene delivery is hindered by various physiological barriers to virus transport from the site of injection to the nucleus of tumor cells and is limited by normal tissue tolerance of toxicity determined by local concentrations of transgene products and viral proteins. To illustrate the obstacles encountered in the delivery and yet limit the scope of discussion, this review focuses only on extracellular transport in solid tumors and distribution of viral vectors in normal organs after they are injected intravenously or intratumorally. This review also discusses current strategies for improving intratumoral transport and specificity of viral vectors.

  1. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The stronger performance of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Modeling and Simulation of Matrix Converter

    DEFF Research Database (Denmark)

    Liu, Fu-rong; Klumpner, Christian; Blaabjerg, Frede

    2005-01-01

    This paper discusses the modeling and simulation of the matrix converter. Two models of the matrix converter are presented: one is based on indirect space vector modulation and the other on the power balance equation. The basis of these two models is given and the modeling process is introduced...

  3. q-Virasoro constraints in matrix models

    Energy Technology Data Exchange (ETDEWEB)

    Nedelin, Anton [Dipartimento di Fisica, Università di Milano-Bicocca and INFN, sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden); Zabzine, Maxim [Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden)

    2017-03-20

    The Virasoro constraints play an important role in the study of matrix models and in understanding the relation between matrix models and CFTs. Recently, localization calculations in supersymmetric gauge theories have produced new families of matrix models, about which we still have very limited knowledge. We concentrate on the elliptic generalization of the hermitian matrix model, which corresponds to the calculation of the partition function on S^3 × S^1 for the vector multiplet. We derive the q-Virasoro constraints for this matrix model. We also observe some interesting algebraic properties of the q-Virasoro algebra.

  4. An Adaptive Sparse Grid Algorithm for Elliptic PDEs with Lognormal Diffusion Coefficient

    KAUST Repository

    Nobile, Fabio

    2016-03-18

    In this work we build on the classical adaptive sparse grid algorithm (T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature), obtaining an enhanced version capable of using non-nested collocation points, and supporting quadrature and interpolation on unbounded sets. We also consider several profit indicators that are suitable for driving the adaptation process. We then use this algorithm to solve an important test case in Uncertainty Quantification, namely the Darcy equation with a lognormal permeability random field, and compare the results with those obtained with the quasi-optimal sparse grids based on profit estimates, which we proposed in our previous works (cf. e.g. Convergence of quasi-optimal sparse grids approximation of Hilbert-valued functions: application to random elliptic PDEs). To treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the adaptive sparse grid quadrature as a control variate in a Monte Carlo simulation. Numerical results show that the adaptive sparse grids perform similarly to the quasi-optimal sparse grids and are very effective in the case of smooth permeability fields. Moreover, their use as a control variate in a Monte Carlo simulation makes it possible to also tackle problems with rough coefficients efficiently, significantly improving the performance of a standard Monte Carlo scheme.
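
    The control-variate idea in the last part of the abstract can be sketched generically: if a cheap surrogate of the quantity of interest (standing in here for the sparse-grid approximation) has a known or cheaply computable mean, Monte Carlo only needs to average the difference. The function names and the toy check below are assumptions, not the authors' code.

    ```python
    import numpy as np

    def control_variate_mc(sample_omega, Q, Q_surrogate, EQ_surrogate, n_samples=1000, seed=0):
        """Estimate E[Q] as EQ_surrogate + mean(Q(w_i) - Q_surrogate(w_i)),
        where EQ_surrogate is the exact (cheaply computable) mean of the surrogate."""
        rng = np.random.default_rng(seed)
        diffs = [Q(w) - Q_surrogate(w) for w in (sample_omega(rng) for _ in range(n_samples))]
        return EQ_surrogate + np.mean(diffs)

    # Toy check: Q(w) = exp(w) with w ~ N(0,1); surrogate = 2nd-order Taylor, exact mean 1.5
    est = control_variate_mc(lambda rng: rng.standard_normal(),
                             np.exp, lambda w: 1.0 + w + 0.5 * w**2, 1.5)
    ```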

  5. Metabolic flux profiling of MDCK cells during growth and canine adenovirus vector production

    OpenAIRE

    Nuno Carinhas; Daniel A. M. Pais; Alexey Koshkin; Paulo Fernandes; Ana S. Coroadinha; Manuel J. T. Carrondo; Paula M. Alves; Ana P. Teixeira

    2016-01-01

    Canine adenovirus vector type 2 (CAV2) represents an alternative to human adenovirus vectors for certain gene therapy applications, particularly neurodegenerative diseases. However, more efficient production processes, assisted by a greater understanding of the effect of infection on producer cells, are required. Combining [1,2-13C]glucose and [U-13C]glutamine, we apply for the first time 13C-Metabolic flux analysis (13C-MFA) to study E1-transformed Madin-Darby Canine Kidney (MDCK) cells meta...

  6. Vector mesons in dense matter and dilepton production in heavy ion collisions at intermediate energies

    Energy Technology Data Exchange (ETDEWEB)

    Santini, Elvira

    2008-02-15

    The vector meson spectral functions are calculated to the first order in the nuclear matter density assuming the dominant contribution comes from the couplings of the vector mesons to nucleons and nucleon resonances. An attempt is made to reproduce the HADES dilepton production data with the in-medium spectral functions of the vector mesons using the Relativistic Quantum Molecular Dynamics (RQMD) transport model developed earlier for modelling heavy-ion collisions. The results are sensitive to the in-medium broadening of nucleon resonances. A generally good agreement with the HADES data is achieved for selfconsistent treatment of the nucleon resonance broadening and the vector meson spectral functions. (orig.)

  7. Vector mesons in dense matter and dilepton production in heavy ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Santini, Elvira

    2008-01-01

    The vector meson spectral functions are calculated to the first order in the nuclear matter density assuming the dominant contribution comes from the couplings of the vector mesons to nucleons and nucleon resonances. An attempt is made to reproduce the HADES dilepton production data with the in-medium spectral functions of the vector mesons using the Relativistic Quantum Molecular Dynamics (RQMD) transport model developed earlier for modelling heavy-ion collisions. The results are sensitive to the in-medium broadening of nucleon resonances. A generally good agreement with the HADES data is achieved for selfconsistent treatment of the nucleon resonance broadening and the vector meson spectral functions. (orig.)

  8. In-place sparse suffix sorting

    DEFF Research Database (Denmark)

    Prezza, Nicola

    2018-01-01

    Information regarding the lexicographical order of a size-b subset of all n text suffixes is often needed. Such information can be stored space-efficiently (in b words) in the sparse suffix array (SSA). The SSA and its relative sparse LCP array (SLCP) can be used as a space-efficient substitute of the sparse suffix tree. Very recently, Gawrychowski and Kociumaka [11] showed that the sparse suffix tree (and therefore SSA and SLCP) can be built in asymptotically optimal O(b) space with a Monte Carlo algorithm running in O(n) time. The main reason for using the SSA and SLCP arrays in place of the sparse suffix tree is, however, their reduced space of b words each. This leads naturally to the quest for in-place algorithms building these arrays. Franceschini and Muthukrishnan [8] showed that the full suffix array can be built in-place and in optimal running time. On the other hand, finding sub-quadratic in...

  9. Reduction of product platform complexity by vectorial Euclidean algorithm

    International Nuclear Information System (INIS)

    Navarrete, Israel Aguilera; Guzman, Alejandro A. Lozano

    2013-01-01

    In the traditional design of machines, equipment and devices, technical solutions are developed practically independently, which increases design cost and complexity. Overcoming this situation has traditionally relied only on the designer's experience. In this work, a reduction of product platform complexity is presented, based on a matrix representation of technical solutions versus product properties. This matrix represents the product platform. From this matrix, the Euclidean distances among technical solutions are obtained. The vectorial distances among technical solutions are thus collected in a new matrix whose order equals the number of technical solutions identified. This new matrix can be reorganized into groups with a hierarchical structure, so that modular design of products becomes more tractable. As a result of this procedure, the minimum vector distances are found, making it possible to identify the best technical solutions for the design problem at hand. Application of these concepts is shown with two examples.
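
    A hedged sketch of the described pipeline (a solutions-by-properties matrix, pairwise Euclidean distances, hierarchical grouping) using SciPy; the platform matrix below is hypothetical.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical product platform: rows = technical solutions, columns = product properties
    platform = np.array([[1, 0, 3, 2],
                         [1, 1, 3, 2],
                         [4, 5, 0, 1],
                         [4, 4, 0, 1],
                         [2, 2, 2, 2]], dtype=float)

    D = squareform(pdist(platform, metric="euclidean"))  # matrix of vectorial distances
    Z = linkage(pdist(platform), method="average")       # hierarchical structure of solutions
    modules = fcluster(Z, t=2, criterion="maxclust")     # reorganize into (here) two groups
    # Closest pair of technical solutions (minimum off-diagonal vector distance)
    closest = np.unravel_index(np.argmin(D + np.eye(len(D)) * D.max()), D.shape)
    ```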

  10. Orthogonalisation of Vectors

    Indian Academy of Sciences (India)

    The Gram-Schmidt process is one of the first things one learns in a course ... We might want to stay as close to the experimental data as possible when converting these vectors to the orthonormal ones demanded by the model. The process of finding the closest orthonormal ... is obtained by writing the matrix A = [a_1, ..., a_n], then ...

  11. Vector 33: A reduce program for vector algebra and calculus in orthogonal curvilinear coordinates

    Science.gov (United States)

    Harper, David

    1989-06-01

    This paper describes a package which enables REDUCE 3.3 to perform algebra and calculus operations upon vectors. Basic algebraic operations between vectors and between scalars and vectors are provided, including the scalar (dot) product and vector (cross) product. The vector differential operators curl, divergence, gradient and Laplacian are also defined, and are valid in any orthogonal curvilinear coordinate system. The package is written in RLISP to allow algebra and calculus to be performed using notation identical to that for operations. Scalars and vectors can be mixed quite freely in the same expression. The package will be of interest to mathematicians, engineers and scientists who need to perform vector calculations in orthogonal curvilinear coordinates.

  12. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, R [University of Alberta, Edmonton, AB (Canada); Fallone, B [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada); MagnetTx Oncology Solutions, Edmonton, AB (Canada); St Aubin, J [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: To develop a Graphic Processor Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first order LBTE was reformulated as a second order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code written in CUDA-C was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, kronecker product, and boundary condition application took 452±1ms on GPU. Matrix assembly for 16 energy groups took 556±3s on CPU, and 358±2ms on GPU using the mappings developed. The CG solver took 93±1s on CPU, and 468±2ms on GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been
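
    The space-angle coupling by a Kronecker product of sparse matrices, followed by a conjugate gradient solve, can be sketched on the CPU with SciPy; the small operators below are hypothetical stand-ins, not the CUDA-C implementation benchmarked above.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # Hypothetical stand-ins for the spatial (FEM) and angular (discrete-ordinates) operators
    n_space, n_angle = 200, 8
    A_space = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n_space, n_space), format="csr")
    A_angle = sp.identity(n_angle, format="csr")

    # Space-angle coupled, symmetric positive-definite system as a sparse Kronecker product
    A = sp.kron(A_angle, A_space, format="csr")
    b = np.ones(A.shape[0])

    x, info = cg(A, b)   # conjugate gradient; info == 0 signals convergence
    ```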

  13. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
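
    The pipeline (sparse coding over a learned dictionary, then k-means clustering of the codes) can be sketched with scikit-learn's non-convolutional dictionary learning; the patch data and parameters below are hypothetical stand-ins, not the efficient convolutional sparse coding of the record.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    patches = rng.random((5000, 64))   # hypothetical flattened image patches / band vectors

    # Learn a dictionary and sparse-code every patch
    dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
    codes = dico.fit_transform(patches)

    # Unsupervised clustering of the sparse representations into land-cover categories
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
    ```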

  14. Matrix Production, Pigment Synthesis, and Sporulation in a Marine Isolated Strain of Bacillus pumilus.

    Science.gov (United States)

    Di Luccia, Blanda; Riccio, Antonio; Vanacore, Adele; Baccigalupi, Loredana; Molinaro, Antonio; Ricca, Ezio

    2015-10-21

    The ability to produce an extracellular matrix and form multicellular communities is an adaptive behavior shared by many bacteria. In Bacillus subtilis, the model system for spore-forming bacteria, matrix production is one of the possible differentiation pathways that a cell can follow when vegetative growth is no longer feasible. While in B. subtilis the genetic system controlling matrix production has been studied in detail, it is still unclear whether other spore formers utilize similar mechanisms. We report that SF214, a pigmented strain of Bacillus pumilus isolated from the marine environment, can produce an extracellular matrix relying on orthologs of many of the genes known to be important for matrix synthesis in B. subtilis. We also report a characterization of the carbohydrates forming the extracellular matrix of strain SF214. The isolation and characterization of mutants altered in matrix synthesis, pigmentation, and spore formation suggest that in strain SF214 the three processes are strictly interconnected and regulated by a common molecular mechanism.

  15. Threshold partitioning of sparse matrices and applications to Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hwajeong; Szyld, D.B. [Temple Univ., Philadelphia, PA (United States)

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.
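
    A minimal sketch of one downstream use of such a partition: once a symmetric permutation has produced dense diagonal blocks, those blocks can serve as a block-Jacobi preconditioner for a Krylov method. The matrix and block boundaries below are hypothetical; the PABLO-style ordering itself is not implemented here.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, gmres, splu

    def block_jacobi_preconditioner(A, block_starts):
        """Block-diagonal preconditioner built from the diagonal blocks delimited
        by `block_starts` (e.g. the partition produced by the ordering algorithm).
        Each block is LU-factorized once and reused at every application."""
        A = A.tocsc()
        blocks = list(zip(block_starts[:-1], block_starts[1:]))
        lus = [splu(A[i:j, i:j].tocsc()) for i, j in blocks]

        def apply(r):
            z = np.empty_like(r)
            for (i, j), lu in zip(blocks, lus):
                z[i:j] = lu.solve(r[i:j])
            return z

        return LinearOperator(A.shape, matvec=apply)

    # Toy usage: tridiagonal-plus-noise matrix, hypothetical partition into blocks of 50
    n = 200
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)) + sp.random(n, n, density=0.01, random_state=0)
    M = block_jacobi_preconditioner(A, list(range(0, n + 1, 50)))
    x, info = gmres(A.tocsc(), np.ones(n), M=M)
    ```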

  16. Next-to-leading order QCD corrections to W+W- production via vector-boson fusion

    International Nuclear Information System (INIS)

    Jaeger, Barbara; Oleari, Carlo; Zeppenfeld, Dieter

    2006-01-01

    Vector-boson fusion processes constitute an important class of reactions at hadron colliders, both for signals and backgrounds of new physics in the electroweak interactions. We consider what is commonly referred to as W+W- production via vector-boson fusion (with subsequent leptonic decay of the Ws), or, more precisely, e+ ν_e μ- ν̄_μ + 2 jets production in proton-proton scattering, with all resonant and non-resonant Feynman diagrams and spin correlations of the final-state leptons included, in the phase-space regions which are dominated by t-channel electroweak-boson exchange. We compute the next-to-leading order QCD corrections to this process, at order α^6 α_s. The QCD corrections are modest, changing total cross sections by less than 10%. Remaining scale uncertainties are below 2%. A fully-flexible next-to-leading order partonic Monte Carlo program allows us to demonstrate these features for cross sections within typical vector-boson-fusion acceptance cuts. Modest corrections are also found for distributions

  17. Support of the extremal measure in a vector equilibrium problem

    International Nuclear Information System (INIS)

    Lapik, M A

    2006-01-01

    A generalization of the Mhaskar-Saff functional is obtained for a vector equilibrium problem with an external field. As an application, the supports of the equilibrium measures are found in a special vector equilibrium problem with Nikishin matrix.

  18. Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations

    Science.gov (United States)

    Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0) preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications, ordering significantly improves overall performance on both distributed and distributed shared-memory systems; that cache reuse may be more important than reducing communication; that it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and that a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread level parallelism.
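
    To illustrate the kind of experiment described (reorder, then solve iteratively), here is a hedged sketch using SciPy's reverse Cuthill-McKee permutation, which is not one of the specific orderings studied above, applied to a toy symmetric positive definite system before CG.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee
    from scipy.sparse.linalg import cg

    # Toy sparse SPD system (diagonally dominant by construction)
    n = 500
    A = sp.random(n, n, density=0.01, random_state=0)
    A = (A + A.T + 20.0 * sp.identity(n)).tocsr()
    b = np.ones(n)

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # bandwidth-reducing permutation
    A_perm = A[perm, :][:, perm]
    b_perm = b[perm]

    x_perm, info = cg(A_perm, b_perm)
    x = np.empty_like(x_perm)
    x[perm] = x_perm                                       # undo the permutation
    ```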

  19. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  20. Measuring nuclear transparency from exclusive vector meson production in lepton-nucleus scattering

    International Nuclear Information System (INIS)

    Fang, G.Y.

    1994-01-01

    Preliminary results on the measurement of nuclear transparencies from exclusive ρ⁰ meson production from E665 at Fermilab are reported. The data were collected on hydrogen, deuterium, carbon, calcium, and lead targets with a mean beam energy of 470 GeV. Increases in the transparencies are observed in both coherent and incoherent production channels as the virtuality of the photon increases, as expected of color transparency. Ideas of systematic studies of color transparency in exclusive vector meson production at CEBAF are discussed.

  1. Transfer matrices and excitations with matrix product states

    International Nuclear Information System (INIS)

    Zauner, V; Rams, M M; Verstraete, F; Draxler, D; Vanderstraeten, L; Degroote, M; Haegeman, J; Stojevic, V; Schuch, N

    2015-01-01

    We use the formalism of tensor network states to investigate the relation between static correlation functions in the ground state of local quantum many-body Hamiltonians and the dispersion relations of the corresponding low-energy excitations. In particular, we show that the matrix product state transfer matrix (MPS-TM)—a central object in the computation of static correlation functions—provides important information about the location and magnitude of the minima of the low-energy dispersion relation(s), and we present supporting numerical data for one-dimensional lattice and continuum models as well as two-dimensional lattice models on a cylinder. We elaborate on the peculiar structure of the MPS-TM’s eigenspectrum and give several arguments for the close relation between the structure of the low-energy spectrum of the system and the form of the static correlation functions. Finally, we discuss how the MPS-TM connects to the exact quantum transfer matrix of the model at zero temperature. We present a renormalization group argument for obtaining finite bond dimension approximations of the MPS, which allows one to reinterpret variational MPS techniques (such as the density matrix renormalization group) as an application of Wilson’s numerical renormalization group along the virtual (imaginary time) dimension of the system. (paper)

  2. Low Complexity Submatrix Divided MMSE Sparse-SQRD Detection for MIMO-OFDM with ESPAR Antenna Receiver

    Directory of Open Access Journals (Sweden)

    Diego Javier Reinoso Chisaguano

    2013-01-01

    Full Text Available Multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) with an electronically steerable passive array radiator (ESPAR) antenna receiver can improve the bit error rate performance and obtain additional diversity gain without increasing the number of Radio Frequency (RF) front-end circuits. However, due to the large size of the channel matrix, the computational cost required for the detection process using Vertical-Bell Laboratories Layered Space-Time (V-BLAST) detection is too high to be implemented. Using the minimum mean square error sparse-sorted QR decomposition (MMSE sparse-SQRD) algorithm for the detection process, the average computational cost can be considerably reduced but is still higher compared with a conventional MIMO-OFDM system without an ESPAR antenna receiver. In this paper, we propose to use a low complexity submatrix divided MMSE sparse-SQRD algorithm for the detection process of MIMO-OFDM with an ESPAR antenna receiver. The computational cost analysis and simulation results show that on average the proposed scheme can further reduce the computational cost and achieve a complexity comparable to the conventional MIMO-OFDM detection schemes.

  3. Solving Sparse Polynomial Optimization Problems with Chordal Structure Using the Sparse, Bounded-Degree Sum-of-Squares Hierarchy

    NARCIS (Netherlands)

    Marandi, Ahmadreza; de Klerk, Etienne; Dahl, Joachim

    The sparse bounded degree sum-of-squares (sparse-BSOS) hierarchy of Weisser, Lasserre and Toh [arXiv:1607.01151,2016] constructs a sequence of lower bounds for a sparse polynomial optimization problem. Under some assumptions, it is proven by the authors that the sequence converges to the optimal

  4. Nonleachable Imidazolium-Incorporated Composite for Disruption of Bacterial Clustering, Exopolysaccharide-Matrix Assembly, and Enhanced Biofilm Removal.

    Science.gov (United States)

    Hwang, Geelsu; Koltisko, Bernard; Jin, Xiaoming; Koo, Hyun

    2017-11-08

    Surface-grown bacteria and production of an extracellular polymeric matrix modulate the assembly of highly cohesive and firmly attached biofilms, making them difficult to remove from solid surfaces. Inhibition of cell growth and inactivation of matrix-producing bacteria can impair biofilm formation and facilitate removal. Here, we developed a novel nonleachable antibacterial composite with potent antibiofilm activity by directly incorporating polymerizable imidazolium-containing resin (antibacterial resin with carbonate linkage; ABR-C) into a methacrylate-based scaffold (ABR-modified composite; ABR-MC) using an efficient yet simplified chemistry. Low-dose inclusion of imidazolium moiety (∼2 wt %) resulted in bioactivity with minimal cytotoxicity without compromising mechanical integrity of the restorative material. The antibiofilm properties of ABR-MC were assessed using an exopolysaccharide-matrix-producing (EPS-matrix-producing) oral pathogen (Streptococcus mutans) in an experimental biofilm model. Using high-resolution confocal fluorescence imaging and biophysical methods, we observed remarkable disruption of bacterial accumulation and defective 3D matrix structure on the surface of ABR-MC. Specifically, the antibacterial composite impaired the ability of S. mutans to form organized bacterial clusters on the surface, resulting in altered biofilm architecture with sparse cell accumulation and reduced amounts of EPS matrix (versus control composite). Biofilm topology analyses on the control composite revealed a highly organized and weblike EPS structure that tethers the bacterial clusters to each other and to the surface, forming a highly cohesive unit. In contrast, such a structured matrix was absent on the surface of ABR-MC with mostly sparse and amorphous EPS, indicating disruption in the biofilm physical stability. Consistent with lack of structural organization, the defective biofilm on the surface of ABR-MC was readily detached when subjected to low shear

  5. Search for single production of a vector-like T quark decaying into a top quark and a Higgs boson

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Daniel; Marchesini, Ivan; Nowatschin, Dominik; Schmidt, Alexander; Schumann, Svenja; Tholen, Heiner; Usai, Emanuele [Universitaet Hamburg (Germany). Institut fuer Experimentalphysik

    2016-07-01

    We search for singly produced vector-like top quark partners (T) in pp-collisions at √(s)=13 TeV with the CMS experiment. Several BSM models, such as composite Higgs and extra dimensions, predict vector-like quarks to be accessible at the LHC. At 13 TeV, single production of vector-like quarks might be enhanced over pair production, depending on the coupling parameters for the individual interactions. In this analysis, we target the decay of the vector-like heavy T quark into a Higgs boson and a top quark, where the top quark decay includes a lepton. Higgs-boson candidates are reconstructed using new methods to resolve the substructure of boosted jets and top-quark candidates are formed by combining leptons, missing transverse energy and jets. With the top-quark and Higgs-boson candidates, we aim for the complete reconstruction of the four-vector of the new particle in question. The largest fraction of the background is contributed through the top-quark pair production process. First results on the search for single vector-like top partners at 13 TeV are presented.

  6. Production rates of strange vector mesons at the Z0 resonance

    Energy Technology Data Exchange (ETDEWEB)

    Dima, Mihai O. [Stanford Univ., CA (United States)

    1997-05-01

    This dissertation presents a study of strange vector meson production, "leading particle" effect and a first direct measurement of the strangeness suppression parameter in hadronic decays of the neutral electroweak boson, Z. The measurements were performed in e+e- collisions at the Stanford Linear Accelerator Center (SLAC) with the SLC Large Detector (SLD) experiment. A new generation particle ID system, the SLD Cerenkov Ring Imaging Detector (CRID) is used to discriminate kaons from pions, enabling the reconstruction of the vector mesons over a wide momentum range. The inclusive production rates of ρ and K*0 and the differential rates versus momentum were measured and are compared with those of other experiments and theoretical predictions. The high longitudinal polarisation of the SLC electron beam is used in conjunction with the electroweak quark production asymmetries to separate quark jets from antiquark jets. K*0 production is studied separately in these samples, and the results show evidence for the "leading particle" effect. The difference between K*0 production rates at high momentum in quark and antiquark jets yields a first direct measurement of strangeness suppression in jet fragmentation.

  7. Reduction of Under-Determined Linear Systems by Sparse Block Matrix Technique

    DEFF Research Database (Denmark)

    Tarp-Johansen, Niels Jacob; Poulsen, Peter Noe; Damkilde, Lars

    1996-01-01

    Under-determined linear equation systems occur in different engineering applications. In structural engineering they typically appear when applying the force method. As an example one could mention limit load analysis based on The Lower Bound Theorem. In this application there is a set of under-determined equilibrium equation restrictions in an LP-problem. A significant reduction of computer time spent on solving the LP-problem is achieved if the equilibrium equations are reduced before going into the optimization procedure. Experience has shown that for some structures one must apply full pivoting to ensure numerical stability of the aforementioned reduction. Moreover the coefficient matrix for the equilibrium equations is typically very sparse. The objective is to deal efficiently with the full pivoting reduction of sparse rectangular matrices using a dynamic storage scheme based on the block matrix concept.

  8. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    Science.gov (United States)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
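
    The record does not spell out SOMP itself; the sketch below is plain Orthogonal Matching Pursuit, the greedy scheme that SOMP stabilizes and equips with a rule for choosing the sparsity level. The measurement matrix and sparsity level here are toy assumptions.

    ```python
    import numpy as np

    def omp(A, y, sparsity_level):
        """Greedy OMP: pick the column of A most correlated with the residual,
        then refit the coefficients on the selected support."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(sparsity_level):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    # Toy usage: recover a 3-sparse signal from 40 random linear measurements
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
    x_hat = omp(A, A @ x_true, sparsity_level=3)
    ```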

  9. A set of ligation-independent in vitro translation vectors for eukaryotic protein production

    Directory of Open Access Journals (Sweden)

    Endo Yaeta

    2008-03-01

    Full Text Available Abstract Background: The last decade has brought the renaissance of protein studies and accelerated the development of high-throughput methods in all aspects of proteomics. Presently, most protein synthesis systems exploit the capacity of living cells to translate proteins, but their application is limited by several factors. A more flexible alternative protein production method is cell-free in vitro protein translation. Currently available in vitro translation systems are suitable for high-throughput robotic protein production, fulfilling the requirements of proteomics studies. The wheat germ extract based in vitro translation system is likely the most promising method, since numerous eukaryotic proteins can be cost-efficiently synthesized in their native folded form. Although currently available vectors for wheat embryo in vitro translation systems ensure high productivity, they do not meet the requirements of state-of-the-art proteomics. Target genes have to be inserted using restriction endonucleases and the plasmids do not encode cleavable affinity purification tags. Results: We designed four ligation independent cloning (LIC) vectors for wheat germ extract based in vitro protein translation. In these constructs, the RNA transcription is driven by T7 or SP6 phage polymerase and two TEV protease cleavable affinity tags can be added to aid protein purification. To evaluate our improved vectors, a plant mitogen activated protein kinase was cloned in all four constructs. Purification of this eukaryotic protein kinase demonstrated that all constructs functioned as intended: insertion of the PCR fragment by LIC worked efficiently, affinity purification of translated proteins by GST-Sepharose or MagneHis particles resulted in high purity kinase, and the affinity tags could efficiently be removed under different reaction conditions. Furthermore, high in vitro kinase activity testified to proper folding of the purified protein. Conclusion: Four newly

  10. Production of recombinant AAV vectors encoding insulin-like growth factor I is enhanced by interaction among AAV rep regulatory sequences

    Directory of Open Access Journals (Sweden)

    Dilley Robert

    2009-01-01

    Full Text Available Abstract Background: Adeno-associated virus (AAV) vectors are promising tools for gene therapy. Currently, their potential is limited by difficulties in producing high vector yields with which to generate transgene protein product. AAV vector production depends in part upon the replication (Rep) proteins required for viral replication. We tested the hypothesis that mutations in the start codon and upstream regulatory elements of Rep78/68 in AAV helper plasmids can regulate recombinant AAV (rAAV) vector production. We further tested whether the resulting rAAV vector preparation augments the production of the potentially therapeutic transgene, insulin-like growth factor I (IGF-I). Results: We constructed a series of AAV helper plasmids containing different Rep78/68 start codons in combination with different gene regulatory sequences. rAAV vectors carrying the human IGF-I gene were prepared with these vectors and the vector preparations used to transduce HT1080 target cells. We found that the substitution of ATG by ACG in the Rep78/68 start codon in an AAV helper plasmid (pAAV-RC) eliminated Rep78/68 translation, rAAV and IGF-I production. Replacement of the heterologous sequence upstream of Rep78/68 in pAAV-RC with the AAV2 endogenous p5 promoter restored translational activity to the ACG mutant, and restored rAAV and IGF-I production. Insertion of the AAV2 p19 promoter sequence into pAAV-RC in front of the heterologous sequence also enabled ACG to function as a start codon for Rep78/68 translation. The data further indicate that the function of the AAV helper construct (pAAV-RC), which is in current widespread use for rAAV production, may be improved by replacement of its AAV2-unrelated heterologous sequence with the native AAV2 p5 promoter. Conclusion: Taken together, the data demonstrate an interplay between the start codon and upstream regulatory sequences in the regulation of Rep78/68 and indicate that selective mutations in Rep78/68 regulatory elements

  11. Measuring nuclear transparency from exclusive vector meson production in lepton-nucleus scattering

    Energy Technology Data Exchange (ETDEWEB)

    Fang, G.Y. [Harvard Univ., Cambridge, MA (United States)

    1994-04-01

    Preliminary results on the measurement of nuclear transparencies from exclusive ρ⁰ meson production from E665 at Fermilab are reported. The data were collected on hydrogen, deuterium, carbon, calcium, and lead targets with a mean beam energy of 470 GeV. Increases in the transparencies are observed in both coherent and incoherent production channels as the virtuality of the photon increases, as expected of color transparency. Ideas of systematic studies of color transparency in exclusive vector meson production at CEBAF are discussed.

  12. Gamow state vectors as functionals over subspaces of the nuclear space

    International Nuclear Information System (INIS)

    Bohm, A.

    1979-12-01

    Exponentially decaying Gamow state vectors are obtained from S-matrix poles in the lower half of the second sheet, and are defined as functionals over a subspace of the nuclear space, Φ. Exponentially growing Gamow state vectors are obtained from S-matrix poles in the upper half of the second sheet, and are defined as functionals over another subspace of Φ. On functionals over these two subspaces the dynamical group of time development splits into two semigroups.

  13. Comparison of two matrix data structures for advanced CSM testbed applications

    Science.gov (United States)

    Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.

    1989-01-01

    The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.

  14. Relevance vector machine technique for the inverse scattering problem

    International Nuclear Information System (INIS)

    Wang Fang-Fang; Zhang Ye-Rong

    2012-01-01

    A novel method based on the relevance vector machine (RVM) for the inverse scattering problem is presented in this paper. The nonlinearity and the ill-posedness inherent in this problem are simultaneously considered. The nonlinearity is embodied in the relation between the scattered field and the target property, which can be obtained through the RVM training process. Moreover, rather than utilizing regularization, the ill-posed nature of the inversion is naturally accounted for because the RVM can produce a probabilistic output. Simulation results reveal that the proposed RVM-based approach can provide comparable performance in terms of accuracy, convergence, robustness, and generalization, and improved sparsity, in comparison with the support vector machine (SVM) based approach. (general)
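
    scikit-learn ships no RVM, but its ARDRegression implements a closely related sparse Bayesian linear model; the sketch below, with purely synthetic data standing in for scattered-field features, only illustrates the probabilistic output and sparse "relevance" weights mentioned above.

    ```python
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 50))            # stand-in features of the scattered field
    w = np.zeros(50)
    w[[3, 11, 27]] = [2.0, -1.5, 0.8]
    y = X @ w + 0.05 * rng.standard_normal(200)   # stand-in target property

    model = ARDRegression().fit(X, y)
    mean, std = model.predict(X[:5], return_std=True)       # probabilistic prediction
    relevant = np.flatnonzero(np.abs(model.coef_) > 1e-3)    # sparse set of retained weights
    ```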

  15. Distribution amplitudes of vector mesons

    Energy Technology Data Exchange (ETDEWEB)

    Braun, V.M. [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Broemmel, D. [Deutsches Elektronen-Synchrotron, Hamburg (Germany); Goeckeler, M. [Regensburg Univ. (DE). Inst. fuer Theoretische Physik] (and others)

    2007-11-15

    Results are presented for the lowest moment of the distribution amplitude for the K* vector meson. Both longitudinal and transverse moments are investigated. We use two flavours of O(a) improved Wilson fermions, together with a non-perturbative renormalisation of the matrix element. (orig.)

  16. Quantum phase transitions in matrix product states of one-dimensional spin-1 chains

    International Nuclear Information System (INIS)

    Zhu Jingmin

    2014-01-01

    We present a new model of quantum phase transitions in matrix product systems of one-dimensional spin-1 chains and study the phase coexistence phenomenon. We find that in the thermodynamic limit the proposed system has three different quantum phases, and by adjusting the control parameters we are able to realize any single phase, the equal coexistence of any two phases, and the equal coexistence of all three phases. At every critical point the physical quantities, including the entanglement, are not discontinuous, and the matrix product system has long-range correlation and N-spin maximal entanglement. We believe that our work is helpful for a comprehensive understanding of quantum phase transitions in matrix product states of one-dimensional spin chains and offers guidance for the preparation and control of one-dimensional spin lattice models with stable coherence and N-spin maximal entanglement. (author)

  17. An NoC Traffic Compiler for Efficient FPGA Implementation of Sparse Graph-Oriented Workloads

    Directory of Open Access Journals (Sweden)

    Nachiket Kapre

    2011-01-01

    synchronization to optimize our workloads for large networks up to 2025 parallel elements for the BSP model and 25 parallel elements for Token Dataflow. This allows us to demonstrate speedups between 1.2× and 22× (3.5× mean), area reductions (number of Processing Elements) between 3× and 15× (9× mean) and dynamic energy savings between 2× and 3.5× (2.7× mean) over a range of real-world graph applications in the BSP compute model. We deliver speedups of 0.5–13× (geomean 3.6×) for Sparse Direct Matrix Solve (Token Dataflow compute model) applied to a range of sparse matrices when using a high-quality placement algorithm. We expect such traffic optimization tools and techniques to become an essential part of the NoC application-mapping flow.

  18. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Meljanac, Daniel [Ruder Boskovic Institute, Division of Materials Physics, Zagreb (Croatia); Meljanac, Stjepan; Pikutic, Danijel [Ruder Boskovic Institute, Division of Theoretical Physics, Zagreb (Croatia)

    2017-12-15

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincare-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ-Minkowski spaces and (iii) κ-Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed. (orig.)

  19. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    International Nuclear Information System (INIS)

    Meljanac, Daniel; Meljanac, Stjepan; Pikutic, Danijel

    2017-01-01

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincare-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ-Minkowski spaces and (iii) κ-Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed. (orig.)

  20. Families of vector-like deformations of relativistic quantum phase spaces, twists and symmetries

    Science.gov (United States)

    Meljanac, Daniel; Meljanac, Stjepan; Pikutić, Danijel

    2017-12-01

    Families of vector-like deformed relativistic quantum phase spaces and corresponding realizations are analyzed. A method for a general construction of the star product is presented. The corresponding twist, expressed in terms of phase space coordinates, in the Hopf algebroid sense is presented. General linear realizations are considered and corresponding twists, in terms of momenta and Poincaré-Weyl generators or gl(n) generators are constructed and R-matrix is discussed. A classification of linear realizations leading to vector-like deformed phase spaces is given. There are three types of spaces: (i) commutative spaces, (ii) κ -Minkowski spaces and (iii) κ -Snyder spaces. The corresponding star products are (i) associative and commutative (but non-local), (ii) associative and non-commutative and (iii) non-associative and non-commutative, respectively. Twisted symmetry algebras are considered. Transposed twists and left-right dual algebras are presented. Finally, some physical applications are discussed.

  1. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan; Cui, Xuefeng; Yu, Ge; Guo, Lili; Gao, Xin

    2017-01-01

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays

  2. Matrix-product states for strongly correlated systems and quantum information processing

    International Nuclear Information System (INIS)

    Saberi, Hamed

    2008-01-01

    This thesis offers new developments in matrix-product state theory for studying the strongly correlated systems and quantum information processing through three major projects: In the first project, we perform a systematic comparison between Wilson's numerical renormalization group (NRG) and White's density-matrix renormalization group (DMRG). The NRG method for solving quantum impurity models yields a set of energy eigenstates that have the form of matrix-product states (MPS). White's DMRG for treating quantum lattice problems can likewise be reformulated in terms of MPS. Thus, the latter constitute a common algebraic structure for both approaches. We exploit this fact to compare the NRG approach for the single-impurity Anderson model to a variational matrix-product state approach (VMPS), equivalent to single-site DMRG. For the latter, we use an ''unfolded'' Wilson chain, which brings about a significant reduction in numerical costs compared to those of NRG. We show that all NRG eigenstates (kept and discarded) can be reproduced using VMPS, and compare the difference in truncation criteria, sharp vs. smooth in energy space, of the two approaches. Finally, we demonstrate that NRG results can be improved upon systematically by performing a variational optimization in the space of variational matrix-product states, using the states produced by NRG as input. In the second project we demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give

  3. Matrix-product states for strongly correlated systems and quantum information processing

    Energy Technology Data Exchange (ETDEWEB)

    Saberi, Hamed

    2008-12-12

    This thesis offers new developments in matrix-product state theory for studying the strongly correlated systems and quantum information processing through three major projects: In the first project, we perform a systematic comparison between Wilson's numerical renormalization group (NRG) and White's density-matrix renormalization group (DMRG). The NRG method for solving quantum impurity models yields a set of energy eigenstates that have the form of matrix-product states (MPS). White's DMRG for treating quantum lattice problems can likewise be reformulated in terms of MPS. Thus, the latter constitute a common algebraic structure for both approaches. We exploit this fact to compare the NRG approach for the single-impurity Anderson model to a variational matrix-product state approach (VMPS), equivalent to single-site DMRG. For the latter, we use an ''unfolded'' Wilson chain, which brings about a significant reduction in numerical costs compared to those of NRG. We show that all NRG eigenstates (kept and discarded) can be reproduced using VMPS, and compare the difference in truncation criteria, sharp vs. smooth in energy space, of the two approaches. Finally, we demonstrate that NRG results can be improved upon systematically by performing a variational optimization in the space of variational matrix-product states, using the states produced by NRG as input. In the second project we demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any

  4. Estimation in a multiplicative mixed model involving a genetic relationship matrix

    Directory of Open Access Journals (Sweden)

    Eccleston John A

    2009-04-01

    Full Text Available Abstract Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A, and a complex structure for the genotype by environment interaction effects, generally of a factor analytic (FA form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of the sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.

  5. Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning.

    Science.gov (United States)

    Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D

    2013-02-01

    Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with far fewer nonzero entries to compress recordings. Notably, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU execution in the data compression stage.
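
    The compression stage described above, a sparse binary sensing matrix with only two nonzero entries per column, is simple to emulate. The sketch below is an assumption-laden illustration and not the authors' BSBL reconstruction code: it builds such a matrix and compresses a synthetic single-channel recording; recovering the signal would additionally require a block-sparse solver that is not shown.

```python
import numpy as np

def sparse_binary_sensing_matrix(m, n, nonzeros_per_col=2, seed=0):
    """m x n binary matrix with a fixed small number of ones per column,
    so compression y = Phi @ x costs only a few additions per sample."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=nonzeros_per_col, replace=False)
        Phi[rows, j] = 1.0
    return Phi

n, m = 512, 256                    # original and compressed lengths (assumed)
t = np.arange(n) / 250.0           # pretend 250 Hz sampling
x = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(1).standard_normal(n)

Phi = sparse_binary_sensing_matrix(m, n)
y = Phi @ x                        # energy-cheap compression on the sensor side
print(Phi.shape, y.shape, int((Phi != 0).sum()), "nonzeros in Phi")
```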

  6. Highway Travel Time Prediction Using Sparse Tensor Completion Tactics and K-Nearest Neighbor Pattern Matching Method

    Directory of Open Access Journals (Sweden)

    Jiandong Zhao

    2018-01-01

    Full Text Available Remote transportation microwave sensor (RTMS) technology is being promoted for China's highways. The spacing between RTMSs is about 2 to 5 km, which leads to missing data and data sparseness problems. These two problems seriously restrict the accuracy of travel time prediction. To address the missing-data problem, a tensor completion method based on traffic multimode characteristics is proposed to recover the lost RTMS speed and volume data. To address the data sparseness problem, virtual sensor nodes are set up between real RTMS nodes, and two-dimensional linear interpolation and a piecewise method are applied to estimate the average travel time between two nodes. Next, improving on the traditional K-nearest neighbor method, an optimized KNN method is proposed for travel time prediction; the optimization covers three aspects. Firstly, the three original state vectors, that is, speed, volume, and time of the day, are subdivided into seven periods. Secondly, the traffic congestion level is added as a new state vector. Thirdly, the cross-validation method is used to calibrate the K value to improve the adaptability of the KNN algorithm. All the algorithms are validated on data collected from the Jinggangao highway. The results show that the proposed method can improve data quality and the prediction precision of travel time.
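
    To make the optimized-KNN idea concrete, here is a minimal, hedged sketch in plain NumPy (not the authors' implementation): historical state vectors of speed, volume, and time of day are matched to a query, the travel times of the K nearest neighbours are averaged, and K is calibrated on a hold-out set; the synthetic data, the absence of feature scaling, and the hold-out split (rather than full cross-validation) are simplifying assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Average travel time of the k nearest historical state vectors."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = np.argsort(d)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(0)
# Synthetic state vectors: [speed (km/h), volume (veh/5 min), time of day (h)]
X = rng.uniform([40, 10, 0], [110, 120, 24], size=(800, 3))
y = 60.0 * 10.0 / X[:, 0] + 0.02 * X[:, 1]   # made-up travel time (min) over 10 km

X_train, y_train = X[:600], y[:600]
X_val, y_val = X[600:], y[600:]

# Calibrate K on the hold-out set (the paper uses cross-validation instead).
errors = {k: np.abs(knn_predict(X_train, y_train, X_val, k) - y_val).mean()
          for k in range(2, 21)}
best_k = min(errors, key=errors.get)
print("chosen K:", best_k, "validation MAE:", round(errors[best_k], 3))
```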

  7. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

    Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, that is, least squares (LS) and the least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure, while the LASSO-based sparse method cannot take full advantage of the prior information. Based on the partial sparse assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, by using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Second, the cooperative channel is reconstructed by LASSO with a partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed method over global sparse channel estimation methods.
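
    As a hedged sketch of the compressive-sensing formulation above (a plain LASSO solved by iterative soft thresholding, not the authors' partial-sparse estimator), the code below recovers a sparse channel from pilot measurements; the pilot matrix, channel length, sparsity level, and regularization weight are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, iters=500):
    """Minimize 0.5*||A h - y||^2 + lam*||h||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L the squared spectral norm
    h = np.zeros(A.shape[1])
    for _ in range(iters):
        g = h - step * (A.T @ (A @ h - y))                          # gradient step
        h = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)    # soft threshold
    return h

rng = np.random.default_rng(0)
L, P = 64, 40                              # channel length, number of pilot measurements
h_true = np.zeros(L)
h_true[rng.choice(L, 5, replace=False)] = rng.standard_normal(5)   # 5 sparse taps

A = rng.standard_normal((P, L)) / np.sqrt(P)    # pilot/measurement matrix
y = A @ h_true + 0.01 * rng.standard_normal(P)  # noisy observations at the receiver

h_hat = ista_lasso(A, y, lam=0.02)
print("true support:     ", sorted(np.flatnonzero(h_true)))
print("estimated support:", sorted(np.flatnonzero(np.abs(h_hat) > 0.05)))
```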

  8. Trigonometric bases for matrix weighted Lp-spaces

    DEFF Research Database (Denmark)

    Nielsen, Morten

    2010-01-01

    We give a complete characterization of 2π-periodic matrix weights W for which the vector-valued trigonometric system forms a Schauder basis for the matrix weighted space Lp(T;W). Then trigonometric quasi-greedy bases for Lp(T;W) are considered. Quasi-greedy bases are systems for which the simple...

  9. Viral vectors for production of recombinant proteins in plants.

    Science.gov (United States)

    Lico, Chiara; Chen, Qiang; Santi, Luca

    2008-08-01

    Global demand for recombinant proteins has steadily accelerated for the last 20 years. These recombinant proteins have a wide range of important applications, including vaccines and therapeutics for human and animal health, industrial enzymes, new materials and components of novel nano-particles for various applications. The majority of recombinant proteins are produced by traditional biological "factories," that is, predominantly mammalian and microbial cell cultures along with yeast and insect cells. However, these traditional technologies cannot satisfy the increasing market demand due to prohibitive capital investment requirements. During the last two decades, plants have been under intensive investigation to provide an alternative system for cost-effective, highly scalable, and safe production of recombinant proteins. Although the genetic engineering of plant viral vectors for heterologous gene expression can be dated back to the early 1980s, recent understanding of plant virology and technical progress in molecular biology have allowed for significant improvements and fine tuning of these vectors. These breakthroughs enable the flourishing of a variety of new viral-based expression systems and their wide application by academic and industry groups. In this review, we describe the principal plant viral-based production strategies and the latest plant viral expression systems, with a particular focus on the variety of proteins produced and their applications. We will summarize the recent progress in the downstream processing of plant materials for efficient extraction and purification of recombinant proteins. (c) 2008 Wiley-Liss, Inc.

  10. Sparse Image Reconstruction in Computed Tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer

    In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... applications. This thesis takes a systematic approach toward establishing quantitative understanding of conditions for sparse reconstruction to work well in CT. A general framework for analyzing sparse reconstruction methods in CT is introduced and two sets of computational tools are proposed: 1...... contributions to a general set of computational characterization tools. Thus, the thesis contributions help advance sparse reconstruction methods toward routine use in...

  11. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin; Luo, Ruiyan; Carroll, Raymond J.; Zhao, Hongyu

    2015-01-01

    predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths

  12. Global quantum discord and matrix product density operators

    Science.gov (United States)

    Huang, Hai-Lin; Cheng, Hong-Guang; Guo, Xiao; Zhang, Duo; Wu, Yuyin; Xu, Jian; Sun, Zhao-Yu

    2018-06-01

    In a previous study, we proposed a procedure to study global quantum discord in 1D chains whose ground states are described by matrix product states [Z.-Y. Sun et al., Ann. Phys. 359, 115 (2015)]. In this paper, we show that with a very simple generalization, the procedure can be used to investigate quantum mixed states described by matrix product density operators, such as quantum chains at finite temperatures and 1D sub-chains in high-dimensional lattices. As an example, we study the global discord in the ground state of a 2D transverse-field Ising lattice, and focus on the scaling behavior of global discord in 1D sub-chains of the lattice. We find that, for any strength of the magnetic field, global discord always shows a linear scaling behavior as the length of the sub-chains increases. In addition, global discord and the so-called "discord density" can be used to indicate the quantum phase transition in the model. Furthermore, based upon our numerical results, we make some reliable predictions about the scaling of global discord defined on the n × n sub-squares in the lattice.

  13. Target product profile choices for intra-domiciliary malaria vector control pesticide products: repel or kill?

    Directory of Open Access Journals (Sweden)

    Moore Sarah J

    2011-07-01

    Full Text Available Abstract Background The most common pesticide products for controlling malaria-transmitting mosquitoes combine two distinct modes of action: (1) conventional insecticidal activity which kills mosquitoes exposed to the pesticide and (2) deterrence of mosquitoes away from protected humans. While deterrence enhances personal or household protection of long-lasting insecticidal nets and indoor residual sprays, it may also attenuate or even reverse communal protection if it diverts mosquitoes to non-users rather than killing them outright. Methods A process-explicit model of malaria transmission is described which captures the sequential interaction between deterrent and toxic actions of vector control pesticides and accounts for the distinctive impacts of toxic activities which kill mosquitoes before or after they have fed upon the occupant of a covered house or sleeping space. Results Increasing deterrency increases personal protection but consistently reduces communal protection because deterrent sub-lethal exposure inevitably reduces the proportion subsequently exposed to higher lethal doses. If the high coverage targets of the World Health Organization are achieved, purely toxic products with no deterrence are predicted to generally provide superior protection to non-users and even users, especially where vectors feed exclusively on humans and a substantial amount of transmission occurs outdoors. Remarkably, this is even the case if that product confers no personal protection and only kills mosquitoes after they have fed. Conclusions Products with purely mosquito-toxic profiles may, therefore, be preferable for programmes with universal coverage targets, rather than those with equivalent toxicity but which also have higher deterrence. However, if purely mosquito-toxic products confer little personal protection because they do not deter mosquitoes and only kill them after they have fed, then they will require aggressive "catch up" campaigns, with

  14. Direction-of-Arrival Estimation for Coherent Sources via Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Zhang-Meng Liu

    2014-01-01

    Full Text Available A spatial filtering-based relevance vector machine (RVM) is proposed in this paper to separate coherent sources and estimate their directions-of-arrival (DOA), with the filter parameters and DOA estimates initialized and refined via sparse Bayesian learning. The RVM is used to exploit the spatial sparsity of the incident signals and gain improved adaptability to more demanding scenarios, such as low signal-to-noise ratio (SNR), limited snapshots, and spatially adjacent sources, and the spatial filters are introduced to enhance global convergence of the original RVM in the case of coherent sources. The proposed method adapts to arbitrary array geometry, and simulation results show that it surpasses the existing methods in DOA estimation performance.

  15. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    Science.gov (United States)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

    Hyperspectral images offer excellent feature discriminability, supplying diagnostic characteristics with high spectral resolution. However, various degradations may negatively affect the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra create intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing the spectral diagnostic characteristics is therefore essential for the subsequent application of HSIs. This paper introduces a spectrum restoration method for HSIs making use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we build a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the undamaged or key bands. Finally, we use the low-rank matrix recovery (LRMR) algorithm for post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under an artificial water-absorption condition and on an AVIRIS-HSI data set. The experimental results, in terms of both qualitative and quantitative assessments, demonstrate the effectiveness in recovering spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometry features are well preserved.

  16. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Frank E., E-mail: harris@qtp.ufl.edu [Department of Physics, University of Utah, Salt Lake City, Utah 84112, USA and Quantum Theory Project, University of Florida, P.O. Box 118435, Gainesville, Florida 32611 (United States)

    2016-05-28

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance r_ij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  17. High-titer recombinant adeno-associated virus production utilizing a recombinant herpes simplex virus type I vector expressing AAV-2 Rep and Cap.

    Science.gov (United States)

    Conway, J E; Rhys, C M; Zolotukhin, I; Zolotukhin, S; Muzyczka, N; Hayward, G S; Byrne, B J

    1999-06-01

    Recombinant adeno-associated virus type 2 (rAAV) vectors have recently been used to achieve long-term, high level transduction in vivo. Further development of rAAV vectors for clinical use requires significant technological improvements in large-scale vector production. In order to facilitate the production of rAAV vectors, a recombinant herpes simplex virus type I vector (rHSV-1) which does not produce ICP27, has been engineered to express the AAV-2 rep and cap genes. The optimal dose of this vector, d27.1-rc, for AAV production has been determined and results in a yield of 380 expression units (EU) of AAV-GFP produced from 293 cells following transfection with AAV-GFP plasmid DNA. In addition, d27.1-rc was also efficient at producing rAAV from cell lines that have an integrated AAV-GFP provirus. Up to 480 EU/cell of AAV-GFP could be produced from the cell line GFP-92, a proviral, 293 derived cell line. Effective amplification of rAAV vectors introduced into 293 cells by infection was also demonstrated. Passage of rAAV with d27.1-rc results in up to 200-fold amplification of AAV-GFP with each passage after coinfection of the vectors. Efficient, large-scale production (>10^9 cells) of AAV-GFP from a proviral cell line was also achieved and these stocks were free of replication-competent AAV. The described rHSV-1 vector provides a novel, simple and flexible way to introduce the AAV-2 rep and cap genes and helper virus functions required to produce high-titer rAAV preparations from any rAAV proviral construct. The efficiency and potential for scalable delivery of d27.1-rc to producer cell cultures should facilitate the production of sufficient quantities of rAAV vectors for clinical application.

  18. Filtering and smoothing of state vector for diffuse state space models

    NARCIS (Netherlands)

    Koopman, S.J.; Durbin, J.

    2003-01-01

    This paper presents exact recursions for calculating the mean and mean square error matrix of the state vector given the observations for the multi-variate linear Gaussian state-space model in the case where the initial state vector is (partially) diffuse.
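
    For context, the standard Kalman recursions underlying such filtering can be sketched for the simplest case. The code below is a generic local-level filter with an approximately diffuse (large-variance) initialization, not the exact diffuse recursions derived in the paper; the variances and data are made up.

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a1=0.0, p1=1e7):
    """Kalman filter for the local level model
       y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t,
    with a large (approximately diffuse) initial variance p1."""
    a, p = a1, p1
    filtered = []
    for yt in y:
        v = yt - a                    # one-step-ahead prediction error
        f = p + sigma_eps2            # its variance
        k = p / f                     # Kalman gain
        a_filt = a + k * v            # filtered state E[alpha_t | y_1..t]
        p_filt = p * (1 - k)
        filtered.append(a_filt)
        a = a_filt                    # state transition is the identity
        p = p_filt + sigma_eta2
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.5, size=200))       # true random-walk level
y = level + rng.normal(0, 1.0, size=200)              # noisy observations
alpha_hat = local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.25)
print(alpha_hat[:5])
```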

  19. Sparse decompositions in 'incoherent' dictionaries

    DEFF Research Database (Denmark)

    Gribonval, R.; Nielsen, Morten

    2003-01-01

    a unique sparse representation in such a dictionary. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of a combinatorial optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may...

  20. Progress on adenovirus-vectored universal influenza vaccines.

    Science.gov (United States)

    Xiang, Kui; Ying, Guan; Yan, Zhou; Shanshan, Yan; Lei, Zhang; Hongjun, Li; Maosheng, Sun

    2015-01-01

    Influenza virus (IFV) infection causes serious health problems and heavy financial burdens each year worldwide. The classical inactivated influenza virus vaccine (IIVV) and live attenuated influenza vaccine (LAIV) must be updated regularly to match the new strains that evolve due to antigenic drift and antigenic shift. However, with the discovery of broadly neutralizing antibodies that recognize conserved antigens, and the CD8(+) T cell responses targeting viral internal proteins nucleoprotein (NP), matrix protein 1 (M1) and polymerase basic 1 (PB1), it is possible to develop a universal influenza vaccine based on the conserved hemagglutinin (HA) stem, NP, and matrix proteins. Recombinant adenovirus (rAd) is an ideal influenza vaccine vector because it has an ideal stability and safety profile, induces balanced humoral and cell-mediated immune responses due to activation of innate immunity, provides 'self-adjuvanting' activity, can mimic natural IFV infection, and confers seamless protection against mucosal pathogens. Moreover, this vector can be developed as a low-cost, rapid-response vaccine that can be quickly manufactured. Therefore, an adenovirus vector encoding conserved influenza antigens holds promise in the development of a universal influenza vaccine. This review will summarize the progress in adenovirus-vectored universal flu vaccines and discuss future novel approaches.

  1. Face recognition based on two-dimensional discriminant sparse preserving projection

    Science.gov (United States)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs within-class and between-class affinity graphs via constrained least squares (LS) and an l1-norm minimization problem, respectively. By operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of the samples while keeping samples from different classes apart. Experimental results on the PIE and AR face databases show that 2DDSPP achieves better recognition performance.

  2. Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature

    International Nuclear Information System (INIS)

    Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.

    2009-01-01

    We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.

  3. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
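
    The "sparse map" abstraction described above can be mimicked with ordinary dictionaries. The sketch below is a toy illustration of the idea, not the authors' code library: a sparse map is stored as a mapping from one index set to a sorted list of connected indices, and the elementary operations of chaining, inversion, and intersection are composed; the atom-to-basis and basis-to-auxiliary maps are invented examples.

```python
from collections import defaultdict

# A sparse map L(i -> {mu}) stored as {i: sorted list of mu} (toy index sets).
atom_to_basis = {0: [0, 1, 2], 1: [3, 4], 2: [5, 6, 7]}
basis_to_aux  = {0: [0, 1], 1: [1], 2: [2], 3: [2, 3],
                 4: [3], 5: [4], 6: [4, 5], 7: [5]}

def chain(map_ab, map_bc):
    """Compose two sparse maps: (A -> B) followed by (B -> C) gives A -> C."""
    out = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            out[a].update(map_bc.get(b, ()))
    return {a: sorted(cs) for a, cs in out.items()}

def invert(map_ab):
    """Invert a sparse map A -> B into B -> A."""
    out = defaultdict(set)
    for a, bs in map_ab.items():
        for b in bs:
            out[b].add(a)
    return {b: sorted(a_set) for b, a_set in out.items()}

def intersect(m1, m2):
    """Keep only targets present in both maps for each source index."""
    return {k: sorted(set(m1.get(k, [])) & set(m2.get(k, [])))
            for k in set(m1) | set(m2)}

atom_to_aux = chain(atom_to_basis, basis_to_aux)
print(atom_to_aux)                     # {0: [0, 1, 2], 1: [2, 3], 2: [4, 5]}
print(invert(atom_to_basis))
print(intersect(atom_to_aux, {0: [1, 2, 9], 1: [3]}))
```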

  4. Orthogonal sparse linear discriminant analysis

    Science.gov (United States)

    Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun

    2018-03-01

    Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention, and many variant versions of LDA have been proposed. However, the inherent problems of LDA are not well solved by these variants. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k-nearest-neighbour graph is first constructed to preserve the local discriminant information of the sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data. Extensive experiments have been performed on several standard public image databases, and the experimental results demonstrate the performance of the proposed OSLDA algorithm.

  5. A Current Control Approach for an Abnormal Grid Supplied Ultra Sparse Z-Source Matrix Converter with a Particle Swarm Optimization Proportional-Integral Induction Motor Drive Controller

    Directory of Open Access Journals (Sweden)

    Seyed Sina Sebtahmadi

    2016-11-01

    Full Text Available A rotational d-q current control scheme based on a Particle Swarm Optimization-Proportional-Integral (PSO-PI) controller is used to drive an induction motor (IM) through an Ultra Sparse Z-source Matrix Converter (USZSMC). To minimize the overall size of the system, the lowest feasible values of the Z-source elements are calculated by considering both the timing and circuit aspects. A meta-heuristic method is integrated into the control system in order to find optimal coefficient values in a single multimodal problem. Hence, the effect of all coefficients in minimizing the total harmonic distortion (THD) and balancing the stator current is considered simultaneously. By changing the reference point of magnitude or frequency, the modulation index can be automatically adjusted and respond to changes without heavy computational cost. The focus of this research is on a reliable and lightweight system with low computational resources. The proposed scheme is validated through both simulation and experimental results.
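
    To illustrate the meta-heuristic tuning step in isolation (not the USZSMC drive itself), here is a hedged sketch of a minimal particle swarm optimization searching for PI gains (Kp, Ki); the smooth surrogate cost standing in for the THD-plus-imbalance objective, the bounds, and the swarm settings are all assumptions.

```python
import numpy as np

def surrogate_cost(gains):
    """Made-up smooth stand-in for the THD / current-imbalance objective;
    its minimum is placed arbitrarily at Kp = 2.0, Ki = 15.0."""
    kp, ki = gains
    return (kp - 2.0) ** 2 + 0.01 * (ki - 15.0) ** 2

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, best_cost = pso(surrogate_cost, bounds=[(0.1, 10.0), (1.0, 100.0)])
print("tuned gains Kp, Ki:", np.round(best, 3), "cost:", round(best_cost, 5))
```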

  6. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  7. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting conditions, viewing angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
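
    A toy, hedged version of the three-step pipeline (dictionary projection, sparse histograms, classification) is sketched below; the dictionaries here are random rather than ICA-learned, the "space-time volumes" are synthetic sinusoids, and a nearest-centroid rule stands in for the support vector machine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, dim = 32, 60          # atoms per dictionary, length of a joint-motion volume

# Step 1 (stand-in): one "dictionary" per activity, random instead of ICA-learned.
dictionaries = {"walk": rng.standard_normal((n_atoms, dim)),
                "wave": rng.standard_normal((n_atoms, dim))}

def sparse_histogram(volumes, D, keep=3):
    """Project volumes on dictionary D, keep the largest coefficients per
    volume, and accumulate a normalized histogram over dictionary atoms."""
    hist = np.zeros(D.shape[0])
    for v in volumes:
        coeff = D @ v
        top = np.argsort(np.abs(coeff))[-keep:]
        hist[top] += np.abs(coeff[top])
    return hist / (np.linalg.norm(hist) + 1e-12)

def features(volumes):
    """Step 2: concatenate per-activity histograms into one feature vector."""
    return np.concatenate([sparse_histogram(volumes, D) for D in dictionaries.values()])

def make_sequence(freq):
    """Synthetic 'space-time volumes': noisy sinusoids standing in for joint motion."""
    t = np.linspace(0.0, 1.0, dim)
    return [np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
            + 0.1 * rng.standard_normal(dim) for _ in range(20)]

train = [(features(make_sequence(2.0)), "walk") for _ in range(10)] + \
        [(features(make_sequence(7.0)), "wave") for _ in range(10)]

# Step 3 (stand-in for the SVM): nearest class centroid on the histogram features.
centroids = {c: np.mean([f for f, lab in train if lab == c], axis=0)
             for c in ("walk", "wave")}
test = features(make_sequence(2.0))
print("predicted:", min(centroids, key=lambda c: np.linalg.norm(test - centroids[c])))
```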

  8. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting conditions, viewing angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  9. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  10. Construction and decomposition of biorthogonal vector-valued wavelets with compact support

    International Nuclear Information System (INIS)

    Chen Qingjiang; Cao Huaixin; Shi Zhi

    2009-01-01

    In this article, we introduce vector-valued multiresolution analysis and the biorthogonal vector-valued wavelets with four-scale. The existence of a class of biorthogonal vector-valued wavelets with compact support associated with a pair of biorthogonal vector-valued scaling functions with compact support is discussed. A method for designing a class of biorthogonal compactly supported vector-valued wavelets with four-scale is proposed by virtue of multiresolution analysis and matrix theory. The biorthogonality properties concerning vector-valued wavelet packets are characterized with the aid of time-frequency analysis method and operator theory. Three biorthogonality formulas regarding them are presented.

  11. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and an automatic-vectorizing FORTRAN compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes a discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  12. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
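
    For context, the unconstrained baseline that NMFRC builds on can be written in a few lines. The sketch below is the standard multiplicative-update NMF with a Frobenius loss, not NMFRC itself; the label information, rank regularization, and graph construction of the letter are omitted.

```python
import numpy as np

def nmf(X, rank, iters=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF minimizing ||X - W H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((50, 40)))  # nonnegative toy data
W, H = nmf(X, rank=5)
print("relative reconstruction error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```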

  13. Vector-matrix-quaternion, array and arithmetic packages: All HAL/S functions implemented in Ada

    Science.gov (United States)

    Klumpp, Allan R.; Kwong, David D.

    1986-01-01

    The HAL/S avionics programmers have enjoyed a variety of tools built into a language tailored to their special requirements. Ada is designed for a broader group of applications. Rather than providing built-in tools, Ada provides the elements with which users can build their own. Standard avionic packages remain to be developed. These must enable programmers to code in Ada as they have coded in HAL/S. The packages under development at JPL will provide all of the vector-matrix, array, and arithmetic functions described in the HAL/S manuals. In addition, the linear algebra package will provide all of the quaternion functions used in Shuttle steering and Galileo attitude control. Furthermore, using Ada's extensibility, many quaternion functions are being implemented as infix operations; equivalent capabilities were never implemented in HAL/S because doing so would entail modifying the compiler and expanding the language. With these packages, many HAL/S expressions will compile and execute in Ada, unchanged. Others can be converted simply by replacing the implicit HAL/S multiply operator with the Ada *. Errors will be trapped and identified. Input/output will be convenient and readable.

  14. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve low-dose computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under a reduced tube-current scanning protocol, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
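
    The "draw each voxel toward its local median" idea can be illustrated with a hedged 2D denoising sketch; this is not the TV_MP CT reconstruction, which also involves the projection system matrix and an alternating scheme. Each iteration below combines a data-fidelity step, a smoothed-TV gradient step, and a pull toward the local median; the weights and step size are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed (isotropic) total-variation term."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def denoise_tv_median(y, lam=0.15, beta=0.3, step=0.2, iters=100):
    """Gradient descent on 0.5||x - y||^2 + lam*TV(x) + 0.5*beta*||x - med(x)||^2,
    treating the local-median image as fixed within each iteration."""
    x = y.copy()
    for _ in range(iters):
        m = median_filter(x, size=3)
        grad = (x - y) + lam * tv_grad(x) + beta * (x - m)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0       # simple square phantom
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
recon = denoise_tv_median(noisy)
print("noisy RMSE:", round(float(np.sqrt(((noisy - clean) ** 2).mean())), 3),
      "denoised RMSE:", round(float(np.sqrt(((recon - clean) ** 2).mean())), 3))
```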

  15. Vascular Canals in Permanent Hyaline Cartilage: Development, Corrosion of Nonmineralized Cartilage Matrix, and Removal of Matrix Degradation Products.

    Science.gov (United States)

    Gabner, Simone; Häusler, Gabriele; Böck, Peter

    2017-06-01

    Core areas in voluminous pieces of permanent cartilage are metabolically supplied via vascular canals (VCs). We studied cartilage corrosion and removal of matrix degradation products during the development of VCs in nose and rib cartilage of piglets. Conventional staining methods were used for glycosaminoglycans, immunohistochemistry was performed to demonstrate collagens types I and II, laminin, Ki-67, von Willebrand factor, VEGF, macrophage marker MAC387, S-100 protein, MMPs -2,-9,-13,-14, and their inhibitors TIMP1 and TIMP2. VCs derived from connective tissue buds that bulged into cartilage matrix ("perichondrial papillae", PPs). Matrix was corroded at the tips of PPs or resulting VCs. Connective tissue stromata in PPs and VCs comprised an axial afferent blood vessel, peripherally located wide capillaries, fibroblasts, newly synthesized matrix, and residues of corroded cartilage matrix (collagen type II, acidic proteoglycans). Multinucleated chondroclasts were absent, and monocytes/macrophages were not seen outside the blood vessels. Vanishing acidity characterized areas of extracellular matrix degradation ("preresorptive layers"), from where the dismantled matrix components diffused out. Leached-out material stained in an identical manner to intact cartilage matrix. It was detected in the stroma and inside capillaries and associated downstream veins. We conclude that the delicate VCs are excavated by endothelial sprouts and fibroblasts, whilst chondroclasts are specialized to remove high volumes of mineralized cartilage. VCs leading into permanent cartilage can be formed by corrosion or inclusion, but most VCs comprise segments that have developed in either of these ways. Anat Rec, 300:1067-1082, 2017. © 2016 Wiley Periodicals, Inc.

  16. Reference vectors in economic choice

    Directory of Open Access Journals (Sweden)

    Teycir Abdelghani GOUCHA

    2013-07-01

    Full Text Available In this paper, the introduction of the notion of a reference vector paves the way for a combination of classical and social approaches within the framework of referential preferences given by matrix groups. It is shown that individual demand issuing from rational decision does not depend on that reference.

  17. Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2012-11-29

    The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of involved parameters is large. This paper considers a model class of second order, linear, parametric, elliptic PDEs on a bounded domain D with diffusion coefficients depending on the parameters in an affine manner. For such models, it was shown in [9, 10] that under very weak assumptions on the diffusion coefficients, the entire family of solutions to such equations can be simultaneously approximated in the Hilbert space V = H_0^1(D) by multivariate sparse polynomials in the parameter vector y with a controlled number N of terms. The convergence rate in terms of N does not depend on the number of parameters, which may be arbitrarily large or countably infinite, thereby breaking the curse of dimensionality. However, these approximation results do not describe the concrete construction of these polynomial expansions, and should therefore rather be viewed as a benchmark for the convergence analysis of numerical methods. The present paper presents an adaptive numerical algorithm for constructing a sequence of sparse polynomials that is proved to converge toward the solution with the optimal benchmark rate. Numerical experiments are presented in large parameter dimension, which confirm the effectiveness of the adaptive approach. © 2012 EDP Sciences, SMAI.

  18. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Science.gov (United States)

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
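
    The log-Euclidean Gaussian kernel at the heart of the method is easy to reproduce for toy covariance descriptors. The sketch below is an illustration, not the authors' seizure detector: it maps SPD covariance matrices of synthetic multichannel epochs through the matrix logarithm and evaluates the Gaussian kernel there; the channel count, epoch length, shrinkage ridge, and kernel width are assumptions.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(epoch, shrink=1e-3):
    """SPD covariance matrix of a (channels x samples) epoch,
    with a small ridge to guarantee positive definiteness."""
    c = np.cov(epoch)
    return c + shrink * np.trace(c) / c.shape[0] * np.eye(c.shape[0])

def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
    """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)) for SPD X, Y."""
    d = np.linalg.norm(logm(X) - logm(Y), ord='fro')
    return float(np.exp(-d ** 2 / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
epoch_a = rng.standard_normal((8, 500))          # 8 channels, 500 samples
epoch_b = rng.standard_normal((8, 500)) * 2.0    # higher-variance epoch
Ca, Cb = covariance_descriptor(epoch_a), covariance_descriptor(epoch_b)
print("k(Ca, Ca) =", log_euclidean_gaussian_kernel(Ca, Ca))
print("k(Ca, Cb) =", log_euclidean_gaussian_kernel(Ca, Cb))
```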

  19. Charged and neutral current production of Δ(1236)

    International Nuclear Information System (INIS)

    Koerner, J.G.; Kobayashi, T.; Avilez, C.

    1977-04-01

    Based on a hybrid quark model approach previously developed by us, which employs a q^2-continuation in terms of generalized meson dominance form factors, we study the weak production of the isobar Δ(1236). First we demonstrate that our model is in agreement with the Argonne data on charged current production of the Δ. We then study neutral current Δ-production using four different gauge models, namely the standard Weinberg-Salam model, a vector-like model with six quarks, a five quark model due to Achiman, Koller and Walsh and a variant of the Guersey-Sikivie model. We find that the results for the differential cross-section in the forward region are very sensitive to the structure of the weak neutral current and suggest that measurements in this region constitute a stringent test of weak interaction models. We also calculate the density matrix elements measurable from decay correlations. The density matrix elements are not so sensitive to the models containing some axial contribution whereas the vector-like model shows a behaviour quite distinct from the others. (orig.) [de

  20. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
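
    As a concrete instance of the sparse coding side of the comparison, here is a minimal orthogonal matching pursuit for linear regression (a generic OMP sketch, not the exact implementations benchmarked in the brief): at each step it picks the column most correlated with the residual and refits by least squares on the selected support.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy support selection plus LS refit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n_samples, n_features = 100, 50
A = rng.standard_normal((n_samples, n_features))
x_true = np.zeros(n_features)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]               # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(n_samples)

x_hat = omp(A, y, n_nonzero=3)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```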