WorldWideScience

Sample records for high-dimensional low-rank matrices

  1. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with parameters selected by cross-validation.

  2. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with parameters selected by cross-validation.

  3. Weighted Low-Rank Approximation of Matrices and Background Modeling

    KAUST Repository

    Dutta, Aritra

    2018-04-15

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, and the other operates in batch-incremental mode, naturally capturing more background variations while being computationally more efficient. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to outliers, similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
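
    For readers who want to experiment, a weighted low-rank approximation of the kind discussed above can be prototyped by alternating weighted least squares on the two factors. The sketch below is a generic illustration of the objective min ||W ∘ (A − U Vᵀ)||_F², not the authors' exact algorithm; all names are ours.

      # Minimal sketch: weighted low-rank approximation by alternating
      # minimization of ||W * (A - U V^T)||_F^2 (entrywise weights W).
      import numpy as np

      def weighted_lra(A, W, rank, iters=50):
          m, n = A.shape
          rng = np.random.default_rng(0)
          U = rng.standard_normal((m, rank))
          V = rng.standard_normal((n, rank))
          for _ in range(iters):
              for i in range(m):      # row i of U: weighted least squares
                  D = np.diag(W[i])
                  U[i] = np.linalg.lstsq(D @ V, W[i] * A[i], rcond=None)[0]
              for j in range(n):      # row j of V: symmetric update
                  D = np.diag(W[:, j])
                  V[j] = np.linalg.lstsq(D @ U, W[:, j] * A[:, j], rcond=None)[0]
          return U, V

    Setting W to all ones recovers the ordinary rank-r approximation; down-weighting frames or pixels suspected to contain foreground gives the robustness effect the abstract describes.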

  4. Weighted Low-Rank Approximation of Matrices and Background Modeling

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2018-01-01

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, and the other operates in batch-incremental mode, naturally capturing more background variations while being computationally more efficient. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to outliers, similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.

  5. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
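
    For context, the separated (canonical low-rank) format that such an alternating least-squares regression constructs is the standard rank-r expansion of a multivariate function; in generic notation (ours, not the paper's):

        u(\xi_1, \dots, \xi_d) \;\approx\; \sum_{l=1}^{r} \prod_{i=1}^{d} u_l^i(\xi_i).

    Each ALS sweep freezes all factors except those of one coordinate direction i and solves a regularized least-squares problem for the u_l^i, so the number of unknowns, and hence the sampling requirement, grows only linearly with the number of random inputs d.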

  6. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  7. Application of Parallel Hierarchical Matrices and Low-Rank Tensors in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-03-12

    Part 1: Parallel H-matrices in spatial statistics. 1. Motivation: improve the statistical model. 2. Tools: hierarchical matrices. 3. Matérn covariance function and joint Gaussian likelihood. 4. Identification of unknown parameters via maximizing the Gaussian log-likelihood. 5. Implementation with HLIBPro. Part 2: Low-rank Tucker tensor methods in spatial statistics.

  8. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
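
    To make the PCP idea concrete, here is a toy sketch that splits a video matrix D (pixels × frames) into a low-rank background L and a sparse foreground S by naive alternating shrinkage. This illustrates the decomposition only; it is not the authors' batch-incremental method, and real PCP solvers (e.g., inexact ALM) handle convergence more carefully.

      import numpy as np

      def svt(X, tau):
          """Singular value thresholding: shrink singular values by tau."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def shrink(X, lam):
          """Entrywise soft thresholding."""
          return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

      def toy_pcp(D, iters=100):
          tau = 0.1 * np.linalg.norm(D, 2)          # heuristic thresholds
          lam = 1.0 / np.sqrt(max(D.shape))
          L = np.zeros_like(D); S = np.zeros_like(D)
          for _ in range(iters):
              L = svt(D - S, tau)                   # low-rank background
              S = shrink(D - L, lam)                # sparse moving objects
          return L, S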

  9. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  10. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
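
    The core object is easy to compute; below is a bare-bones version with a single global threshold (the paper's procedure is adaptive, choosing an entry-specific threshold, which this toy omits).

      import numpy as np

      def differential_correlation(X1, X2, thresh):
          """X1, X2: (samples x genes) arrays from the two conditions."""
          D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
          D[np.abs(D) < thresh] = 0.0    # keep only large differential entries
          return D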

  11. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high-dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time-varying matrix-valued integrands. We observe n equidistant high-frequency data points of the underlying Brownian diffusion and we assume that N/n → c in (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.

  12. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    Science.gov (United States)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that a dramatic speedup in computing Gaussian process regression can be achieved by using a continuous-variable quantum computer, i.e., the computation time can potentially be reduced exponentially. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.

  13. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
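
    A hedged sketch of the estimation idea under MCAR: each covariance entry is estimated from the samples where both coordinates are observed, after which banding or thresholding can be applied as in the paper. Names and details are ours.

      import numpy as np

      def pairwise_covariance(X):
          """X: (n x p) array with np.nan marking missing entries."""
          obs = ~np.isnan(X)
          counts = obs.T.astype(float) @ obs.astype(float)  # jointly observed pairs
          means = np.nansum(X, axis=0) / obs.sum(axis=0)    # per-coordinate means
          Xd = np.where(obs, X - means, 0.0)                # centered, zero if missing
          return (Xd.T @ Xd) / np.maximum(counts, 1.0)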

  14. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    Science.gov (United States)

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471

  15. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

    Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to sparse principal component analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
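
    A toy rendering of the "sparse leading eigenvalue" idea: truncated power iteration keeps only the k largest-magnitude entries of each iterate, yielding a sparse approximation of the leading eigenvector of the differential matrix D. This is illustrative only; it is not the sLED test statistic or its permutation calibration.

      import numpy as np

      def sparse_leading_eigvec(D, k, iters=200):
          v = np.ones(D.shape[0]) / np.sqrt(D.shape[0])
          for _ in range(iters):
              v = D @ v
              keep = np.argsort(np.abs(v))[-k:]   # k largest-magnitude entries
              mask = np.zeros_like(v); mask[keep] = 1.0
              v = v * mask
              v /= np.linalg.norm(v)
          return v, v @ D @ v                     # sparse vector, Rayleigh quotient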

  16. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression, where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need only consider G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse, affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
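
    The structural assumption amounts to a sparse factor decomposition of the genetic covariance matrix; in generic factor-analysis notation (a sketch of the modeling idea, not the paper's full mixed-effects specification):

        G \;=\; \Lambda \Lambda^\top + \Psi,

    where the p × k loading matrix Λ has few nonzero entries per column (each latent factor affects only a few observed traits), k is small, and Ψ is diagonal; the Bayesian prior enforces the sparsity of Λ.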

  17. Low-rank coal research

    Energy Technology Data Exchange (ETDEWEB)

    Weber, G. F.; Laudal, D. L.

    1989-01-01

    This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SOx/NOx control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).

  18. Low rank magnetic resonance fingerprinting.

    Science.gov (United States)

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low-rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low-rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
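
    The iteration described above (gradient step, then low-rank projection) is a generic pattern that can be sketched as follows; A and At stand for the undersampled acquisition operator and its adjoint, and the sketch omits the dictionary-sparsity term that Low Rank MRF additionally enforces.

      import numpy as np

      def lowrank_recover(Y, A, At, r, step=1.0, iters=50):
          """Y: measurements; A, At: forward operator and adjoint (callables)."""
          X = At(Y)
          for _ in range(iters):
              G = X - step * At(A(X) - Y)                 # gradient step
              U, s, Vt = np.linalg.svd(G, full_matrices=False)
              X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]      # rank-r SVD projection
          return X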

  19. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any $\epsilon > 0$ the conditions $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, or $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, $\delta_{tr}^M < \sqrt{(t-1)/t}$ are shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions.
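
    For reference, $\delta$ and $\theta$ above are the standard restricted isometry and restricted orthogonality constants (standard definitions, not specific to this thesis): $\delta_k^A$ is the smallest $\delta \ge 0$ such that

        (1 - \delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta)\,\|x\|_2^2 \quad \text{for all } k\text{-sparse } x,

    and $\theta_{k_1,k_2}^A$ is the smallest $\theta$ such that $|\langle Ax_1, Ax_2\rangle| \le \theta \|x_1\|_2 \|x_2\|_2$ for all $k_1$-sparse $x_1$ and $k_2$-sparse $x_2$ with disjoint supports.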

  20. Global sensitivity analysis using low-rank tensor approximations

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are compared with the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model.
    Highlights:
    • A new method is proposed for global sensitivity analysis of high-dimensional models.
    • Low-rank tensor approximations (LRA) are used as a meta-modeling technique.
    • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived.
    • The accuracy and efficiency of the approach are illustrated in application examples.
    • LRA-based indices are compared to indices based on polynomial chaos expansions.
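
    For reference, the first-order Sobol' index of input $X_i$ is the standard variance ratio below; the contribution of the paper is that, for LRA meta-models, this quantity follows analytically from the polynomial coefficients:

        S_i \;=\; \frac{\operatorname{Var}\left[\,\mathbb{E}(Y \mid X_i)\,\right]}{\operatorname{Var}(Y)}.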

  1. Batched Tile Low-Rank GEMM on GPUs

    KAUST Repository

    Charara, Ali

    2018-02-01

    Dense General Matrix-Matrix (GEMM) multiplication is a core operation of the Basic Linear Algebra Subroutines (BLAS) library and therefore often resides at the bottom of the traditional software stack for most scientific applications. In fact, chip manufacturers pay special attention to the GEMM kernel implementation, since this is exactly where most high-performance software libraries extract the hardware performance. With the emergence of big data applications involving large data-sparse, hierarchically low-rank matrices, the off-diagonal tiles can be compressed to reduce the algorithmic complexity and the memory footprint. The resulting tile low-rank (TLR) data format is composed of small data structures, which retain the most significant information for each tile. However, to operate on low-rank tiles, a new GEMM operation and its corresponding API have to be designed on GPUs so that they can exploit the data sparsity structure of the matrix while leveraging the underlying TLR compression format. The main idea consists in aggregating all operations onto a single kernel launch to compensate for their low arithmetic intensities and to mitigate the data transfer overhead on GPUs. The new TLR GEMM kernel outperforms the cuBLAS dense batched GEMM by more than an order of magnitude and creates new opportunities for advanced TLR algorithms.
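
    The arithmetic savings come from never forming dense tiles. For two tiles stored in factored form A = U_A V_Aᵀ and B = U_B V_Bᵀ with n × k factors, their product is again low-rank and costs O(nk²) instead of O(n³); a sketch in generic notation (not the library's API):

      import numpy as np

      def tlr_tile_gemm(UA, VA, UB, VB):
          """A = UA @ VA.T, B = UB @ VB.T, all factors of shape (n, k)."""
          M = VA.T @ UB                 # small k x k coupling matrix
          return UA @ M, VB             # factors of C = (UA @ M) @ VB.T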

  2. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling, a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  3. Texture Repairing by Unified Low Rank Optimization

    Institute of Scientific and Technical Information of China (English)

    Xiao Liang; Xiang Ren; Zhengdong Zhang; Yi Ma

    2016-01-01

    In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low-rank property of texture, the algorithm also uses the sparsity assumption on natural images: because a natural image is piecewise smooth, it is sparse in a certain transformed domain (such as the Fourier or wavelet domain). We combine the low-rank and sparsity properties of the texture image together in the proposed algorithm. Our algorithm, based on convex optimization, can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. This algorithm integrates texture rectification and repairing into one optimization problem. Through extensive simulations, we show our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method demonstrates significant advantage over local patch based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.

  4. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.

  5. Low-rank quadratic semidefinite programming

    KAUST Repository

    Yuan, Ganzhao

    2013-04-01

    Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.

  6. Low-rank quadratic semidefinite programming

    KAUST Repository

    Yuan, Ganzhao; Zhang, Zhenjie; Ghanem, Bernard; Hao, Zhifeng

    2013-01-01

    Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.

  7. Efficient Low Rank Tensor Ring Completion

    OpenAIRE

    Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin

    2017-01-01

    Using the matrix product state (MPS) representation of the recently proposed tensor ring decompositions, in this paper we propose a tensor completion algorithm, which is an alternating minimization algorithm that alternates over the factors in the MPS representation. This development is motivated in part by the success of matrix completion algorithms that alternate over the (low-rank) factors. In this paper, we propose a spectral initialization for the tensor ring completion algorithm and ana...

  8. Tensor Factorization for Low-Rank Tensor Completion.

    Science.gov (United States)

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, achieving state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, due to their naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-art approaches, including the TNN and matricization methods.
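
    In the spirit of the factorization just described, the completion problem can be written with two small factor tensors and the t-product (generic notation, a sketch rather than the paper's exact formulation):

        \min_{\mathcal{A}, \mathcal{B}} \; \tfrac{1}{2} \left\| \mathcal{P}_\Omega\!\left( \mathcal{A} * \mathcal{B} - \mathcal{M} \right) \right\|_F^2, \qquad \mathcal{A} \in \mathbb{R}^{n_1 \times r \times n_3}, \; \mathcal{B} \in \mathbb{R}^{r \times n_2 \times n_3},

    where P_Ω keeps the observed entries of M and * denotes the tensor-tensor product used by the t-SVD. Alternating minimization updates A and B in turn, each step touching only the two small factors rather than recomputing a full t-SVD.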

  9. Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models

    KAUST Repository

    El Gharamti, Mohamad

    2010-12-01

    Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have accurate knowledge of the movement of contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied to a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimates of the future contaminant state. Incorporating observations into the model can guide it toward the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost of the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF), approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large-dimensional contaminant problems where use of the full KF is not possible. Experimental results show that, with perfect and imperfect models, the low-rank filters can provide estimates as accurate as those of the full KF but at much lower computational cost. Localization can help the filter analysis as long as there are enough neighborhood data around the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
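
    The computational trick shared by SEKF-type filters is to carry the error covariance in factored low-rank form P = L Lᵀ with L of size n × r, r ≪ n. Below is a hedged sketch of the analysis (update) step in that representation; the actual SEKF also evolves L through the model dynamics and uses a specific choice of subspace.

      import numpy as np

      def lowrank_kalman_update(x, L, H, R, y):
          """x: state (n,); L: (n, r) factor of P; H: (m, n); R: (m, m); y: (m,)."""
          HL = H @ L                                  # m x r
          Sinv = np.linalg.inv(HL @ HL.T + R)         # innovation covariance inverse
          K = L @ HL.T @ Sinv                         # gain, built from factors only
          x_new = x + K @ (y - H @ x)
          # Factor downdate so that P_new = (I - K H) P = L_new @ L_new.T:
          C = np.linalg.cholesky(np.eye(L.shape[1]) - HL.T @ Sinv @ HL)
          return x_new, L @ C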

  10. Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models

    KAUST Repository

    El Gharamti, Mohamad

    2010-01-01

    Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have accurate knowledge of the movement of contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied to a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimates of the future contaminant state. Incorporating observations into the model can guide it toward the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost of the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF), approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large-dimensional contaminant problems where use of the full KF is not possible. Experimental results show that, with perfect and imperfect models, the low-rank filters can provide estimates as accurate as those of the full KF but at much lower computational cost. Localization can help the filter analysis as long as there are enough neighborhood data around the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.

  11. Tile Low Rank Cholesky Factorization for Climate/Weather Modeling Applications on Manycore Architectures

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Keyes, David E.

    2017-01-01

    Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting the low-rank character in this context is very recent, and will be key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile low-rank Cholesky factorization and propose a high-performance implementation using an OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint savings on covariance matrices of size up to 200K×200K show a gain of more than an order of magnitude for both metrics against state-of-the-art open-source and vendor-optimized numerical libraries, while preserving the numerical accuracy of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.

  12. Tile Low Rank Cholesky Factorization for Climate/Weather Modeling Applications on Manycore Architectures

    KAUST Repository

    Akbudak, Kadir

    2017-05-11

    Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting the low-rank character in this context is very recent, and will be key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile low-rank Cholesky factorization and propose a high-performance implementation using an OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint savings on covariance matrices of size up to 200K×200K show a gain of more than an order of magnitude for both metrics against state-of-the-art open-source and vendor-optimized numerical libraries, while preserving the numerical accuracy of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.

  13. Low-rank driving in quantum systems

    International Nuclear Information System (INIS)

    Burkey, R.S.

    1989-01-01

    A new property of quantum systems called low-rank driving is introduced. Numerous simplifications in the solution of the time-dependent Schroedinger equation are pointed out for systems having this property. These simplifications are in the areas of finding eigenvalues, taking the Laplace transform, converting Schroedinger's equation to an integral form, discretizing the continuum, generalizing the Weisskopf-Wigner approximation, band-diagonalizing the Hamiltonian, finding new exact solutions to Schroedinger's equation, and so forth. The principal physical application considered is the phenomenon of coherent population trapping in continuum-continuum interactions.

  14. Beyond Low Rank: A Data-Adaptive Tensor Completion Method

    OpenAIRE

    Zhang, Lei; Wei, Wei; Shi, Qinfeng; Shen, Chunhua; Hengel, Anton van den; Zhang, Yanning

    2017-01-01

    Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, however, this approach is confronted with two challenging problems, namely (1) tensor rank determination; (2) handling real tensor data which only approximately fulfils the low-rank requirement. To address these two issues, we develop a data-adaptive tensor completion model which explicitly represents both the low-rank and non-low-rank structures in a latent tensor. Representing the no...

  15. Efficient tensor completion for color image and video recovery: Low-rank tensor train

    OpenAIRE

    Bengua, Johann A.; Phien, Ho N.; Tuan, Hoang D.; Do, Minh N.

    2016-01-01

    This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via tensor tra...

  16. Proceedings of the sixteenth biennial low-rank fuels symposium

    International Nuclear Information System (INIS)

    1991-01-01

    Low-rank coals represent a major energy resource for the world. The Low-Rank Fuels Symposium, building on the traditions established by the Lignite Symposium, focuses on the key opportunities for this resource. This conference offers a forum for leaders from industry, government, and academia to gather to share current information on the opportunities represented by low-rank coals. In the United States and throughout the world, the utility industry is the primary user of low-rank coals. As such, current experiences and future opportunities for new technologies in this industry were the primary focuses of the symposium

  17. Proceedings of the sixteenth biennial low-rank fuels symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    Low-rank coals represent a major energy resource for the world. The Low-Rank Fuels Symposium, building on the traditions established by the Lignite Symposium, focuses on the key opportunities for this resource. This conference offers a forum for leaders from industry, government, and academia to gather to share current information on the opportunities represented by low-rank coals. In the United States and throughout the world, the utility industry is the primary user of low-rank coals. As such, current experiences and future opportunities for new technologies in this industry were the primary focuses of the symposium.

  18. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek the graph weight matrix and the low-dimensional representations of the data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on several data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.

  19. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    Science.gov (United States)

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices: if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset demonstrate that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  20. On low-rank updates to the singular value and Tucker decompositions

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, M J

    2009-10-06

    The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
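
    The rank-one update at the heart of Brand-style streaming SVD reduces to diagonalizing a small (r+1) × (r+1) core matrix. A compact sketch follows (our notation; the paper's sub-linear modification is not shown).

      import numpy as np

      def svd_rank_one_update(U, s, V, a, b, r):
          """Rank-r SVD of U @ diag(s) @ V.T + outer(a, b)."""
          m_ = U.T @ a; p = a - U @ m_; ra = np.linalg.norm(p)
          n_ = V.T @ b; q = b - V @ n_; rb = np.linalg.norm(q)
          P = p / ra if ra > 1e-12 else np.zeros_like(p)
          Q = q / rb if rb > 1e-12 else np.zeros_like(q)
          K = np.zeros((len(s) + 1, len(s) + 1))
          K[:len(s), :len(s)] = np.diag(s)
          K += np.outer(np.append(m_, ra), np.append(n_, rb))  # rank-one core update
          Uk, sk, Vkt = np.linalg.svd(K)
          U_new = np.column_stack([U, P]) @ Uk
          V_new = np.column_stack([V, Q]) @ Vkt.T
          return U_new[:, :r], sk[:r], V_new[:, :r]            # re-truncate to rank r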

  1. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    International Nuclear Information System (INIS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization associates label information of both training samples and dictionary atoms, and encourages a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.

  2. A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.

    Science.gov (United States)

    Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho

    2014-10-01

    Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have one shortcoming thus far: they only consider cases in which the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are removed most of the time. In view of the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose the support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classifiers. Then, we chose the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to determine the cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy than using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate the positive effect of low-ranking miRNAs, via the m:n feature subset, on cancer classification.

  3. Low-rank Kalman filtering for efficient state estimation of subsurface advective contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2012-04-01

    Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied on a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear invariants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. Low-rank filters are demonstrated to significantly reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.

  4. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  5. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  6. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applications.

  7. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.

  8. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu; Liu, Si; Ahuja, Narendra; Yang, Ming-Hsuan; Ghanem, Bernard

    2014-01-01

The proposed CLRST algorithm is computationally attractive, since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences.

  9. Sampling and Low-Rank Tensor Approximation of the Response Surface

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann Georg; El-Moselhy, Tarek A.

    2013-01-01

Most (quasi-)Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate (we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution), there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely in case the solution is already well represented by the low-rank tensor approximation. This can be easily checked by evaluating the residuum of the PDE with the approximate solution. The procedure is demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.

  10. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the matrix factorization. In contrast to graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal, closed-form solutions. A direct algorithm (for data with a small number of points) and an alternating iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
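
    A minimal sketch of the model class discussed above, assuming a data matrix X, a symmetric affinity matrix W, and a weight lam (all hypothetical inputs): alternating exact updates for min ||X - UV||_F^2 + lam tr(V L V^T), where L is the graph Laplacian built from W. The V-step reduces to a Sylvester equation, which is one way such models admit closed-form updates; this is illustrative, not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def manifold_lrmf(X, W, r=10, lam=1.0, iters=50):
    # X: d x n data, W: n x n symmetric affinity; L is the graph Laplacian.
    L = np.diag(W.sum(axis=1)) - W
    rng = np.random.default_rng(0)
    V = rng.standard_normal((r, X.shape[1]))
    for _ in range(iters):
        # U-step: plain least squares for min ||X - U V||_F^2.
        U = X @ V.T @ np.linalg.pinv(V @ V.T)
        # V-step: stationarity gives (U^T U) V + lam * V L = U^T X,
        # a Sylvester equation solved exactly.
        V = solve_sylvester(U.T @ U, lam * L, U.T @ X)
    return U, V
```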

  11. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive, since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.

  12. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

Full Text Available In dynamic Positron Emission Tomography (PET), an estimate of the radioactivity concentration is obtained from a series of frames of sinogram data taken at intervals ranging in duration from 10 seconds to minutes, under some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. This limits the speed of data acquisition and, moreover, cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which play a major role in improving the visualization of contrast for some diagnostic requirements. This paper presents a novel low-rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component, while variations between successive frames are abstracted into the sparse component. The resulting nuclear-norm and l1-norm minimization problem can be efficiently solved by many recently developed numerical methods; in this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
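
    The sparse-plus-low-rank split that SLCR builds on can be illustrated with generic principal component pursuit, solved by an inexact augmented Lagrange multiplier scheme. This is a hedged sketch on a plain data matrix M; the paper's method additionally works through the sinogram model and uses a linearized alternating direction method, neither of which is reproduced here.

```python
import numpy as np

def rpca(M, lam=None, mu=None, iters=300, tol=1e-7):
    # Decompose M into L (low-rank) + S (sparse) via principal component
    # pursuit with an inexact augmented Lagrange multiplier iteration.
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))       # standard PCP weight
    mu = mu or 1.25 / np.linalg.norm(M, 2)      # common step-size heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt        # SVT step
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)  # soft-threshold
        Y += mu * (M - L - S)                                 # dual update
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```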

  13. A Generalized Robust Minimization Framework for Low-Rank Matrix Recovery

    Directory of Open Access Journals (Sweden)

    Wen-Ze Shao

    2014-01-01

Full Text Available This paper considers the problem of recovering low-rank matrices which are heavily corrupted by outliers or large errors. To improve the robustness of existing recovery methods, the problem is solved by formulating it as a generalized nonsmooth nonconvex minimization functional via exploiting the Schatten p-norm (0 < p ≤ 1) and the Lq (0 < q ≤ 1) seminorm. Two numerical algorithms are provided based on the augmented Lagrange multiplier (ALM) and accelerated proximal gradient (APG) methods as well as efficient root-finder strategies. Experimental results demonstrate that the proposed generalized approach is more inclusive and effective compared with state-of-the-art methods, either convex or nonconvex.

  14. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  15. Low-Rank Linear Dynamical Systems for Motor Imagery EEG.

    Science.gov (United States)

    Zhang, Wenchang; Sun, Fuchun; Tan, Chuanqi; Liu, Shaobo

    2016-01-01

The common spatial pattern (CSP) and other spatiospectral feature extraction methods have become the most effective and successful approaches for motor imagery electroencephalography (MI-EEG) pattern recognition from multichannel neural activity in recent years. However, these methods require a lot of preprocessing and postprocessing, such as filtering, de-meaning, and spatiospectral feature fusion, which easily influence classification accuracy. In this paper, we utilize linear dynamical systems (LDSs) for EEG signal feature extraction and classification. The LDS model has several advantages, such as simultaneous spatial and temporal feature matrix generation, freedom from preprocessing or postprocessing, and low cost. Furthermore, a low-rank matrix decomposition approach is introduced to remove noise and the resting-state component in order to improve the robustness of the system. We then propose a low-rank LDS algorithm that decomposes the feature subspace of LDSs on a finite Grassmannian and obtains better performance. Extensive experiments are carried out on the public datasets "BCI Competition III Dataset IVa" and "BCI Competition IV Database 2a." The results show that our three proposed methods yield higher accuracies than prevailing approaches such as CSP and CSSP.

  16. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering.

    Science.gov (United States)

    Wang, Yang; Wu, Lin

    2018-07-01

Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for multi-view spectral clustering: it elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, yielding a better graph partition than single-view counterparts. In this paper we revisit it from a fundamentally different perspective, by discovering that LRR is essentially a latent clustered orthogonal projection based representation coupled with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to the others that indicates its members, and it intuitively projects the view-specific feature representation onto the span of all orthogonal basis vectors to characterize the cluster structures. Building on this finding, we propose the following: (1) we decompose LRR into a latent clustered orthogonal representation via low-rank matrix factorization, to encode more flexible cluster structures than LRR over primal data objects; (2) we convert the problem of LRR into that of simultaneously learning the orthogonal clustered representation and an optimized local graph structure for each view; (3) the learned orthogonal clustered representations and local graph structures enjoy the same magnitude across views, so that an ideal multi-view consensus can be readily achieved. Experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structures, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restrictive than the global low-rank regularization...

  18. Direct liquefaction of low-rank coals under mild conditions

    Energy Technology Data Exchange (ETDEWEB)

    Braun, N.; Rinaldi, R. [Max-Planck-Institut fuer Kohlenforschung, Muelheim an der Ruhr (Germany)

    2013-11-01

Due to decreasing petroleum reserves, direct coal liquefaction is attracting renewed interest as an alternative process to produce liquid fuels. The combination of hydrogen peroxide and coal is not a new one. In the early 1980s, Vasilakos and Clinton described a procedure for desulfurization by leaching coal with solutions of sulphuric acid/H2O2. But so far, H2O2 has never been ascribed a major role in coal liquefaction. Herein, we describe a novel approach for liquefying low-rank coals using a solution of H2O2 in the presence of a soluble non-transition-metal catalyst. (orig.)

  19. High dimensional entanglement

    CSIR Research Space (South Africa)

McLaren, M.

    2012-07-01

Full Text Available High dimensional entanglement. M. McLaren, F.S. Roux & A. Forbes. CSIR National Laser Centre, PO Box 395, Pretoria 0001; School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; School of Physics, University of KwaZulu-Natal.

  20. Fast Low-Rank Shared Dictionary Learning for Image Classification.

    Science.gov (United States)

    Tiep Huu Vu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.

  1. Catalytic briquettes from low-rank coal for NO reduction

    Energy Technology Data Exchange (ETDEWEB)

    A. Boyano; M.E. Galvez; R. Moliner; M.J. Lazaro [Instituto de Carboquimica, CSIC, Zaragoza (Spain)

    2007-07-01

Briquetting is one of the most ancient and widespread techniques of coal agglomeration, nowadays becoming obsolete for home combustion applications. However, growing social interest in environmental protection opens new applications for this technique, especially in developed countries. In this work, a series of catalytic briquettes were prepared from low-rank Spanish coal and commercial pitch by means of a pressure agglomeration method. After that, they were cured in air and doped by equilibrium impregnation with vanadium compounds. Preparation conditions (especially those of the activation and oxidizing processes) were varied to study their effects on catalytic behaviour. The catalytic briquettes showed relatively high NO conversion at low temperatures in all cases; however, a strong relation between the preparation process and the NO conversion reached was observed. The preparation procedure affects not only the NO reduction efficiency but also the mechanical strength of the briquettes, as a consequence of the structural and chemical changes induced during the activation and oxidation procedures. Generally speaking, mechanical resistance is enhanced by an optimal pore volume and the creation of new carboxyl groups on the surface. Just the contrary, NO reduction is promoted by highly microporous structures and larger amounts of surface oxygen groups. Both facts force a compromise in the preparation procedure, which will depend on the application. 24 refs., 4 figs., 3 tabs.

  2. Carbon-free hydrogen production from low rank coal

    Science.gov (United States)

    Aziz, Muhammad; Oda, Takuya; Kashiwagi, Takao

    2018-02-01

A novel carbon-free integrated system for hydrogen production and storage from low rank coal is proposed and evaluated. To find the optimum energy efficiency, two different systems employing different chemical looping technologies are modeled. The first integrated system consists of coal drying, gasification, syngas chemical looping, and hydrogenation. The second system combines coal drying, coal direct chemical looping, and hydrogenation. In addition, to cover the consumed electricity and recover the energy, a combined cycle is adopted as an additional module for power generation. The objective of the study is to find the system with the highest performance in terms of total energy efficiency, including hydrogen production efficiency and power generation efficiency. To achieve thorough energy/heat circulation throughout each module and the whole integrated system, enhanced process integration technology is employed, incorporating two core basic technologies: exergy recovery and process integration. Several operating parameters, including the target moisture content in the drying module and the operating pressure in the chemical looping module, are examined in terms of their influence on energy efficiency. Process modeling and calculation show that both integrated systems can realize high total energy efficiency, above 60%. The system employing coal direct chemical looping achieves the higher energy efficiency, including hydrogen production and power generation, of about 83%. In addition, the optimum target moisture content in drying and the optimum operating pressure in chemical looping have also been determined.

  3. Enhancing Low-Rank Subspace Clustering by Manifold Regularization.

    Science.gov (United States)

    Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben

    2014-07-25

Recently, the low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster data points that lie in a union of low-dimensional subspaces. Given a set of data points, LRR seeks the lowest-rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian-regularized LRR (LapLRR). An efficient optimization procedure, based on the alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets demonstrate that the performance of LRR is enhanced by the manifold regularization.

  4. Assessment of low-rank (LRC) drying technologies

    International Nuclear Information System (INIS)

    Willson, W.G.; Young, B.C.; Irwinj, W.

    1992-01-01

This paper reports that low-rank coals (LRCs), brown, lignitic, and subbituminous coals, represent nearly half of the estimated coal resources in the world. In many of the developing nations, LRCs are the only source of low-cost energy. LRCs are geologically younger than higher-rank bituminous coals and are typically present in thick seams with less cover (overburden) than bituminous coals, making them recoverable by low-cost strip mining. Current pit-head coal prices for LRCs range from a low of around $0.25 per MM Btu for subbituminous coals from the USA's Powder River Basin, to highs of around $1.00 for those that are more costly to mine. On the other hand, the pit-head prices of bituminous coals in the USA range from a low of around $1 to over $2 per MM Btu. Unfortunately, this differential in favor of LRCs is more than offset in distant markets where, until now, they have been considered a nuisance. Often less than half of their weight is combustible, the rest being water and ash. Thus the cost of hauling LRC any distance at all in its untreated dry bulk form is prohibitive. However, from a utilization aspect, LRCs have a lower fuel ratio (fixed carbon to volatile matter) and are typically an order of magnitude more reactive than bituminous coals. Many LRCs, including the enormous reserves in Alaska, Australia, and Indonesia, also have extremely low sulfur contents of only a few tenths of a percent. Low mining costs, high reactivity, and extremely low sulfur content would make these coals a premium fuel were it not for their high moisture levels, which range from around 25% w/w to over 60% w/w. High moisture creates a mistaken perception, among major coal importers, of inferior quality, and the many positive features of LRCs are overlooked.

  5. Low-rank coal research. Quarterly report, January--March 1990

    Energy Technology Data Exchange (ETDEWEB)

    1990-08-01

    This document contains several quarterly progress reports for low-rank coal research that was performed from January-March 1990. Reports in Control Technology and Coal Preparation Research are in Flue Gas Cleanup, Waste Management, and Regional Energy Policy Program for the Northern Great Plains. Reports in Advanced Research and Technology Development are presented in Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Reports in Combustion Research cover Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Coal Fuels, Diesel Utilization of Low-Rank Coals, and Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications. Liquefaction Research is reported in Low-Rank Coal Direct Liquefaction. Gasification Research progress is discussed for Production of Hydrogen and By-Products from Coal and for Chemistry of Sulfur Removal in Mild Gas.

  6. A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark

    Energy Technology Data Exchange (ETDEWEB)

Gittens, Alex; Kottalam, Jey; Yang, Jiyan; Ringenburg, Michael F.; Chhugani, Jatin; Racah, Evan; Singh, Mohitdeep; Yao, Yushu; Fischer, Curt; Ruebel, Oliver; Bowen, Benjamin; Lewis, Norman G.; Mahoney, Michael W.; Krishnamurthy, Venkat; Prabhat, Mr

    2017-07-27

We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1 TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1 TB dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster than the Spark version on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices, and vector computation using SIMD units. We report these results and their implications for the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
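
    A compact sketch of the randomized CX factorization itself (independent of the Spark implementation above): approximate the top-k right singular subspace with a Gaussian sketch, convert it to column leverage scores, sample actual data columns C, and solve a least-squares problem for X so that A ≈ CX. Parameter names and defaults below are assumptions.

```python
import numpy as np

def randomized_cx(A, k=10, c=20, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)               # approximate range of A
    _, _, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0) / k     # approximate column leverage scores
    p = lev / lev.sum()
    idx = rng.choice(n, size=c, replace=False, p=p)
    C = A[:, idx]                                # actual (interpretable) data columns
    X = np.linalg.pinv(C) @ A                    # A ≈ C X in least squares
    return C, X, idx
```

    Keeping actual columns of A (rather than abstract singular vectors) is what makes CX useful for interpretability, e.g., identifying informative pixels or ions in the MSI data described above.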

  7. Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.

    Science.gov (United States)

    Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N

    2017-05-01

This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first one, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second one derives from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme that transforms a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
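
    The TT rank underlying both SiLRTC-TT and TMac-TT comes from sequential matricizations of the tensor. The sketch below shows the standard TT-SVD factorization of a full tensor, which makes that matricization scheme concrete; the completion algorithms themselves (which only see a subset of entries) are not reproduced here, and max_rank is a placeholder.

```python
import numpy as np

def tt_svd(T, max_rank=8):
    # Factor a full tensor T into tensor-train cores by sweeping left to
    # right: matricize, truncate an SVD, carry the remainder forward.
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12)))          # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r)) # 3-way core
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))           # final core
    return cores
```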

  8. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    Science.gov (United States)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  9. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    International Nuclear Information System (INIS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-01-01

Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
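
    A minimal sketch of the canonical (CP) low-rank format used above, for a 3-way array: alternating least squares updates each factor against the matching unfolding of the tensor. This illustrates the decomposition the paper applies to grid-sampled PES values; the tensor T, the rank, and the iteration count are placeholders, not values from the paper.

```python
import numpy as np

def khatri_rao(A, B):
    # Columnwise Kronecker product: rows indexed by (i, j) pairs.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank=4, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T0 = T.reshape(I, J * K)                    # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K) # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J) # mode-2 unfolding
    for _ in range(iters):
        # Each update solves a linear least-squares problem via the
        # normal equations (Hadamard product of Gram matrices).
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

    Once the factors are known, a d-dimensional integral of the tensor collapses into a sum of rank-many products of one-dimensional sums, which is the source of the speedup the abstract describes.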

  10. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or policy variables)...

  11. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.

  12. Synfuels from low-rank coals at the Great Plains Gasification Plant

    International Nuclear Information System (INIS)

    Pollock, D.

    1992-01-01

This presentation focuses on the use of low rank coals to produce synfuels. A worldwide abundance of low rank coals exists. Large deposits in the United States are located in Texas and North Dakota. Low rank coal deposits are also found in Europe, India, and Australia. Because of its high moisture content, ranging from 30% to 60% or higher, lignite is usually utilized in mine-mouth applications. Lignite is generally very reactive and contains varying amounts of ash and sulfur. Typical uses for lignite are listed. A commercial application using lignite as feedstock to a synfuels plant, Dakota Gasification Company's Great Plains Gasification Plant, is discussed.

  13. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    KAUST Repository

    Zhang, Zhendong; Liu, Yike; Alkhalifah, Tariq Ali; Wu, Zedong

    2017-01-01

    efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space

  14. Low-rank coal research, Task 5.1. Topical report, April 1986--December 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This document is a topical progress report for Low-Rank Coal Research performed April 1986 - December 1992. Control Technology and Coal Preparation Research is described for Flue Gas Cleanup, Waste Management, Regional Energy Policy Program for the Northern Great Plains, and Hot-Gas Cleanup. Advanced Research and Technology Development was conducted on Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Combustion Research is described for Atmospheric Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Fuels (completed 10/31/90), Diesel Utilization of Low-Rank Coals (completed 12/31/90), Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications (completed 10/31/90), Nitrous Oxide Emission, and Pressurized Fluidized-Bed Combustion. Liquefaction Research in Low-Rank Coal Direct Liquefaction is discussed. Gasification Research was conducted in Production of Hydrogen and By-Products from Coals and in Sulfur Forms in Coal.

  15. Clean utilization of low-rank coals for low-cost power generation

    International Nuclear Information System (INIS)

    Sondreal, E.A.

    1992-01-01

Despite the unique utilization problems of low-rank coals, the ten US steam electric plants having the lowest operating cost in 1990 were all fueled on either lignite or subbituminous coal. Ash deposition problems, which have been a major barrier to sustaining high load on US boilers burning high-sodium low-rank coals, have been substantially reduced by improvements in coal selection, boiler design, on-line cleaning, operating conditions, and additives. Advantages of low-rank coals in advanced systems are their noncaking behavior when heated, their high reactivity allowing more complete reaction at lower temperatures, and the low sulfur content of selected deposits. The principal barrier issues are the high-temperature behavior of ash and of volatile alkali derived from the coal-bound sodium found in some low-rank coals. Successful upgrading of low-rank coals requires that the product be both stable and suitable for end use in conventional and advanced systems. Coal-water fuel produced by hydrothermal processing of high-moisture low-rank coal meets these criteria, whereas most dry products from drying or carbonizing in hot gas tend to create dust and spontaneous ignition problems unless coated, agglomerated, briquetted, or afforded special handling.

  16. Low-rank coal study. Volume 4. Regulatory, environmental, and market analyses

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

The regulatory, environmental, and market constraints to development of US low-rank coal resources are analyzed. Government-imposed environmental and regulatory requirements are among the most important factors that determine the markets for low-rank coal and the technology used in the extraction, delivery, and utilization systems. Both state and federal controls are examined, in light of available data on impacts and effluents associated with major low-rank coal development efforts. The market analysis examines both the penetration of existing markets by low-rank coal and the evolution of potential markets in the future. The electric utility industry consumes about 99 percent of total low-rank coal production. This use in utility boilers rose dramatically in the 1970s and is expected to continue to grow rapidly. In the late 1980s and 1990s, industrial direct use of low-rank coal and the production of synthetic fuels are expected to start growing as major new markets.

  17. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.

  18. Low-ranking female Japanese macaques make efforts for social grooming.

    Science.gov (United States)

    Kurihara, Yosuke

    2016-04-01

Grooming is essential to build social relationships in primates. Its importance is universal among animals of different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females was similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 adult female Japanese macaques Macaca fuscata fuscata from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and to feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates.

  19. Low-rank coal study : national needs for resource development. Volume 2. Resource characterization

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

Comprehensive data are presented on the quantity, quality, and distribution of low-rank coal (subbituminous and lignite) deposits in the United States. The major lignite-bearing areas are the Fort Union Region and the Gulf Lignite Region, with the predominant strippable reserves being in the states of North Dakota, Montana, and Texas. The largest subbituminous coal deposits are in the Powder River Region of Montana and Wyoming, the San Juan Basin of New Mexico, and northern Alaska. For each of the low-rank coal-bearing regions, descriptions are provided of the geology; strippable reserves; active and planned mines; classification of identified resources by depth, seam thickness, sulfur content, and ash content; overburden characteristics; aquifers; and coal properties and characteristics. Low-rank coals are distinguished from bituminous coals by unique chemical and physical properties that affect their behavior in extraction, utilization, or conversion processes. The most characteristic properties of the organic fraction of low-rank coals are the high inherent moisture and oxygen contents, and the correspondingly low heating value. Mineral matter (ash) contents and compositions of all coals are highly variable; however, low-rank coals tend to have a higher proportion of the alkali components CaO, MgO, and Na2O. About 90% of the reserve base of US low-rank coal has less than one percent sulfur. Water resources in the major low-rank coal-bearing regions tend to have highly seasonal availability. Some areas appear to have ample water resources to support major new coal projects; in other areas, such as Texas, water supplies may be a constraining factor on development.

  20. Low-ranking female Japanese macaques make efforts for social grooming

    Science.gov (United States)

    Kurihara, Yosuke

    2016-01-01

Grooming is essential to build social relationships in primates. Its importance is universal among animals of different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females was similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 adult female Japanese macaques Macaca fuscata fuscata from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and to feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates. PMID:29491896

  1. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function (PSF) for Wiener filtering is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries, and richer soft tissue structure information compared with existing CT image restoration methods. The robustness of our method was assessed through numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP), and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764

  2. Low-rank coal study: national needs for resource development. Volume 3. Technology evaluation

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

    Technologies applicable to the development and use of low-rank coals are analyzed in order to identify specific needs for research, development, and demonstration (RD and D). Major sections of the report address the following technologies: extraction; transportation; preparation, handling and storage; conventional combustion and environmental control technology; gasification; liquefaction; and pyrolysis. Each of these sections contains an introduction and summary of the key issues with regard to subbituminous coal and lignite; description of all relevant technology, both existing and under development; a description of related environmental control technology; an evaluation of the effects of low-rank coal properties on the technology; and summaries of current commercial status of the technology and/or current RD and D projects relevant to low-rank coals.

  3. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed, guided by the estimated speech rank, to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, yielding less residual noise and lower speech distortion.

  4. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander; Nowak, Wolfgang

    2014-01-01

Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. The reduced computational complexity is O(dL log L), where L := max_i n_i, i = 1, ..., d.

  5. Low-rank coal research: Volume 2, Advanced research and technology development: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mann, M.D.; Swanson, M.L.; Benson, S.A.; Radonovich, L.; Steadman, E.N.; Sweeny, P.G.; McCollor, D.P.; Kleesattel, D.; Grow, D.; Falcone, S.K.

    1987-04-01

    Volume II contains articles on advanced combustion phenomena, combustion inorganic transformation; coal/char reactivity; liquefaction reactivity of low-rank coals, gasification ash and slag characterization, and fine particulate emissions. These articles have been entered individually into EDB and ERA. (LTN)

  6. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic; Nouy, Anthony

    2017-01-01

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  7. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic

    2017-06-30

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.
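
    The final truncation step described above can be made concrete: given an iterate held in factored form X = U V^T, a QR-plus-small-SVD rounding recompresses it to lower rank without ever forming X. The sketch below is a standard such rounding, assuming factors U and V and a relative tolerance; it illustrates the idea rather than the paper's full algorithm.

```python
import numpy as np

def recompress(U, V, tol=1e-8):
    # X = U @ V.T with U (m x r) and V (n x r); returns factors of lower rank
    # such that the dropped singular values are below tol * (largest one).
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    Us, s, Vst = np.linalg.svd(Ru @ Rv.T)     # small r x r SVD
    keep = s > tol * s[0]
    return Qu @ Us[:, keep] * s[keep], Qv @ Vst.T[:, keep]
```

    Because the SVD is taken of the small r x r core, the cost scales with the rank rather than with the full matrix dimensions, which is what keeps such iterations cheap.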

  8. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander

    2014-05-04

Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. The reduced computational complexity is O(dL log L), where L := max_i n_i, i = 1, ..., d.
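
    The FFT ingredient can be sketched in one dimension: a stationary covariance on a regular grid yields a symmetric Toeplitz matrix, which embeds in a circulant matrix whose matrix-vector product costs O(L log L) via FFT. The paper combines this with separability and low-rank structure across dimensions; only the 1-D building block is shown here, with c and x as placeholder inputs.

```python
import numpy as np

def toeplitz_matvec(c, x):
    # c: first column of a symmetric L x L Toeplitz covariance; x: vector.
    # Embed the Toeplitz matrix in a circulant one of size 2L - 2 and
    # multiply via FFT diagonalization.
    L = len(c)
    col = np.concatenate([c, c[-2:0:-1]])     # circulant embedding
    eig = np.fft.fft(col)                     # eigenvalues of the circulant
    xp = np.concatenate([x, np.zeros(L - 2)])
    return np.real(np.fft.ifft(eig * np.fft.fft(xp))[:L])
```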

  9. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

Full Text Available Annotating remote sensing images is a challenging task due to its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low-rank-constrained coefficient matrix; it then adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. The empirical study demonstrates that MLC-LRR achieves better performance in annotating images than these comparison methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.

  10. Pyrolysis characteristics and kinetics of low rank coals by distributed activation energy model

    International Nuclear Information System (INIS)

    Song, Huijuan; Liu, Guangrui; Wu, Jinhu

    2016-01-01

Highlights: • Types of carbon in coal structure were investigated by curve-fitted 13C NMR spectra. • The work relates pyrolysis characteristics and kinetics to coal structure. • Pyrolysis kinetics of low rank coals were studied by DAEM with the Miura integral method. • DAEM could supply accurate extrapolations at relatively higher heating rates. - Abstract: This work investigates the pyrolysis characteristics and kinetics of low rank coals in relation to coal structure by thermogravimetric analysis (TGA), the distributed activation energy model (DAEM), and solid-state 13C Nuclear Magnetic Resonance (NMR). Four low rank coals selected from different mines in China were studied. TGA was carried out with a non-isothermal temperature program in N2 at heating rates of 5, 10, 20, and 30 °C/min to examine the pyrolysis processes of the coal samples. The results showed that the corresponding characteristic temperatures and the maximum mass loss rates increased as the heating rate increased. Pyrolysis kinetics parameters were investigated by the DAEM using the Miura integral method. The accuracy of the DAEM was verified by the good fit between the experimental and calculated curves of the conversion degree x at the selected heating rates and at relatively higher heating rates. The average activation energy was 331 kJ/mol (coal NM), 298 kJ/mol (coal NX), 302 kJ/mol (coal HLJ), and 196 kJ/mol (coal SD), respectively. Curve-fitting analysis of the 13C NMR spectra was performed to characterize the chemical structures of the low rank coals. The results showed that various types of carbon functional groups with different relative contents exist in the coal structure, indicating that the pyrolysis characteristics and kinetics of low rank coals are closely associated with their chemical structures.
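
    The Miura integral method mentioned above admits a very short numerical core: at a fixed conversion x, regress ln(beta/T^2) on 1/T across heating rates; the slope of that line is -E/R, giving the activation energy at that conversion. The sketch below uses placeholder temperatures, not data from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def daem_activation_energy(betas, temps):
    # betas: heating rates; temps: temperatures (K) at which each run
    # reaches the same fixed conversion x. Miura's relation:
    # ln(beta / T^2) = const - E / (R T), so the slope of the fit is -E/R.
    betas = np.asarray(betas, dtype=float)
    temps = np.asarray(temps, dtype=float)
    y = np.log(betas / temps**2)
    slope, _ = np.polyfit(1.0 / temps, y, 1)
    return -slope * R  # activation energy in J/mol

# Hypothetical usage: temperatures at 50% conversion for four heating rates.
E = daem_activation_energy([5, 10, 20, 30], [690.0, 703.0, 717.0, 726.0])
print(f"E at x = 0.5: {E / 1000:.0f} kJ/mol")
```

    Repeating the fit over a grid of conversions yields the activation energy distribution E(x) that characterizes the DAEM.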

  11. Video deraining and desnowing using temporal correlation and low-rank matrix completion.

    Science.gov (United States)

    Kim, Jin-Hwan; Sim, Jae-Young; Kim, Chang-Su

    2015-09-01

    A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.
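
    The low-rank matrix completion step can be sketched generically: treat the pixels detected as rain as missing, and fill them by iterating soft-thresholded SVDs (a SoftImpute-style scheme). This is a hedged stand-in for the completion technique named above; the detection pipeline and the exact solver used in the paper are not reproduced, and tau is an assumed shrinkage parameter.

```python
import numpy as np

def complete(M, observed, tau=1.0, iters=100):
    # M: data matrix; observed: boolean mask of trusted (non-rain) entries.
    X = np.zeros_like(M)
    for _ in range(iters):
        Y = np.where(observed, M, X)                 # keep data, fill the rest
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # singular-value shrinkage
    return X
```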

  12. Low temperature oxidation and spontaneous combustion characteristics of upgraded low rank coal

    Energy Technology Data Exchange (ETDEWEB)

    Choi, H.K.; Kim, S.D.; Yoo, J.H.; Chun, D.H.; Rhim, Y.J.; Lee, S.H. [Korea Institute of Energy Research, Daejeon (Korea, Republic of)

    2013-07-01

The low temperature oxidation and spontaneous combustion characteristics of dried coal produced from low rank coal using the upgraded brown coal (UBC) process were investigated. To this end, the proximate properties, crossing-point temperature (CPT), and isothermal oxidation characteristics of the coal were analyzed. The isothermal oxidation characteristics were estimated by considering the formation rates of CO and CO2 at low temperatures. The upgraded low rank coal had higher heating values than the raw coal. It was also less susceptible to low temperature oxidation and spontaneous combustion. This appears to result from the asphalt coating on the surface of the coal, which suppresses the active functional groups from reacting with oxygen in the air. Increasing the upgrading pressure also negatively affected low temperature oxidation and spontaneous combustion.

  13. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    Science.gov (United States)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI that combines low-rank matrix completion with the partial separability (PS) model. In data acquisition, the k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space, for each temporal frame. In reconstruction, the navigator data are first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain the partial k-t data. The parallel imaging method is then used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, conditions under which the conventional PS method fails.

  14. Reweighted Low-Rank Tensor Completion and its Applications in Video Recovery

    OpenAIRE

    M., Baburaj; George, Sudhish N.

    2016-01-01

    This paper focuses on recovering multi-dimensional data, i.e., tensors, from randomly corrupted, incomplete observations. Inspired by reweighted $l_1$ norm minimization for sparsity enhancement, this paper proposes a reweighted singular value enhancement scheme to promote low tubal rank in the tensor completion process. An efficient iterative decomposition scheme based on the t-SVD is proposed, which improves low-rank signal recovery significantly. The effectiveness of the proposed method is es...

  15. A Class of Weighted Low Rank Approximation of the Positive Semidefinite Hankel Matrix

    Directory of Open Access Journals (Sweden)

    Jianchao Bai

    2015-01-01

    Full Text Available We consider the weighted low rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. Using the Vandermonde representation, we first transform the problem into an unconstrained optimization problem and then use the nonlinear conjugate gradient algorithm with the Armijo line search to solve it. Numerical examples illustrate that the new method is feasible and effective.

  16. The application of low-rank and sparse decomposition method in the field of climatology

    Science.gov (United States)

    Gupta, Nitika; Bhaskaran, Prasad K.

    2018-04-01

    The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique had been limited to areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, the method exactly separates the original data into a set of low-rank and sparse components: the low-rank component captures the linearly correlated part of the data (the expected or mean behavior), while the sparse component represents the variation or perturbation of the data from its mean behavior. The study attempts to verify the efficacy of the technique in the field of climatology with two real-world examples. The first applies the technique to maximum wind-speed (MWS) data for the Indian Ocean (IO) region; it brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second deals with sea surface temperature (SST) data for the Bay of Bengal region, which exhibits a distinct pattern in the sparse component. The study highlights the value of the proposed technique for the interpretation and visualization of climate data.
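
    For readers who want to reproduce the basic split, a minimal sketch of the low-rank-plus-sparse decomposition (principal component pursuit solved with a simple ADMM loop) follows; it is a generic implementation, not the authors' exact one, and the lam and mu defaults are the usual RPCA heuristics:

      import numpy as np

      def svt(X, tau):
          """Singular value thresholding: prox of tau * nuclear norm."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def shrink(X, tau):
          """Soft thresholding: prox of tau * l1 norm."""
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

      def rpca(M, lam=None, mu=None, n_iter=300, tol=1e-7):
          """Split M (float array) into low-rank L (mean behavior) plus
          sparse S (departures) by ADMM on:
              min ||L||_* + lam * ||S||_1   s.t.  L + S = M."""
          m, n = M.shape
          if lam is None:
              lam = 1.0 / np.sqrt(max(m, n))
          if mu is None:
              mu = 0.25 * m * n / np.abs(M).sum()
          L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
          norm_M = np.linalg.norm(M)
          for _ in range(n_iter):
              L = svt(M - S + Y / mu, 1.0 / mu)
              S = shrink(M - L + Y / mu, lam / mu)
              resid = M - L - S
              Y += mu * resid
              if np.linalg.norm(resid) / norm_M < tol:
                  break
          return L, S

    Stacking, for example, vectorized monthly MWS fields as the columns of M would make L the mean behavior and S the anomalies.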

  17. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs together with blind image restoration, simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  18. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF method based on the randomized singular value decomposition, which significantly reduces the memory required to calculate a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them with low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. The T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to a factor of 1000 for the MRF fast imaging with steady-state precession sequence and more than a factor of 15 for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
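
    The memory saving comes from never forming a full SVD of the dictionary. A minimal randomized range-finder in the style of Halko et al. might look as follows; the dictionary size and rank below are made-up placeholders, not the paper's values:

      import numpy as np

      def randomized_svd(D, k, n_oversample=10, n_power_iter=2, seed=0):
          """Approximate rank-k SVD of a large dictionary D via a random
          sketch of its range, avoiding a full decomposition."""
          rng = np.random.default_rng(seed)
          Omega = rng.standard_normal((D.shape[1], k + n_oversample))
          Y = D @ Omega
          for _ in range(n_power_iter):       # power iterations sharpen the range
              Y = D @ (D.T @ Y)
          Q, _ = np.linalg.qr(Y)              # orthonormal basis for the range
          B = Q.T @ D                         # small (k + p) x n matrix
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          return (Q @ Ub)[:, :k], s[:k], Vt[:k]

      # hypothetical dictionary: 20,000 fingerprints x 500 time points
      rng = np.random.default_rng(1)
      D = rng.standard_normal((20_000, 500)).astype(np.float32)
      U, s, Vt = randomized_svd(D, k=25)
      D_compressed = D @ Vt.T                 # fingerprints in a 25-dim subspace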

  19. Task 27 -- Alaskan low-rank coal-water fuel demonstration project

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    Development of coal-water-fuel (CWF) technology has to date been predicated on the use of high-rank bituminous coal only; the high inherent moisture content of low-rank coal has so far precluded its use for CWF production. The unique feature of the Alaskan project is the integration of hot-water-drying (HWD) into CWF technology as a beneficiation process. Hot-water-drying is an EERC-developed technology, unavailable to the competition, that extends the range of CWF feedstocks to low-rank coals. The primary objective of the Alaskan project is to promote interest in the CWF marketplace by demonstrating the commercial viability of low-rank coal-water-fuel (LRCWF). While commercialization plans cannot be finalized until the implementation and results of the Alaskan LRCWF Project are known and evaluated, this report has been prepared to specifically address business objectives for the project and to outline a market development plan for meeting them.

  20. Application of House of Quality in evaluation of low rank coal pyrolysis polygeneration technologies

    International Nuclear Information System (INIS)

    Yang, Qingchun; Yang, Siyu; Qian, Yu; Kraslawski, Andrzej

    2015-01-01

    Highlights: • The House of Quality method was used for assessment of coal pyrolysis polygeneration technologies. • Low rank coal pyrolysis polygeneration processes based on a solid heat carrier, a moving bed and a fluidized bed were evaluated. • Technical and environmental criteria were used for the assessment. • The low rank coal pyrolysis polygeneration process based on a fluidized bed is the best option. - Abstract: Increasing interest in low rank coal pyrolysis (LRCP) polygeneration has resulted in the development of a number of different technologies and approaches. Evaluation of LRCP processes should include not only conventional efficiency, economic and environmental assessments, but also take sustainability aspects into consideration. As a result of the many complex variables involved, selection of the most suitable LRCP technology becomes a challenging task. This paper applies the House of Quality method to a comprehensive evaluation of LRCP. A multi-level evaluation model addressing 19 customer needs and analyzing 10 technical characteristics is developed. Using this model, the paper evaluates three LRCP technologies, based on solid heat carrier, moving bed and fluidized bed concepts, respectively. The results show that the three most important customer needs are the level of technical maturity, wastewater emissions, and internal rate of return, and that the three most important technical characteristics are production costs, investment costs and waste emissions. On the basis of the conducted analysis, it is concluded that the LRCP process utilizing a fluidized bed is the best of the alternatives studied.
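
    The core House of Quality computation is a weighted sum: each technical characteristic's importance is the customer-need weights multiplied through the need-characteristic relationship matrix. A made-up three-by-three miniature of the idea (the paper uses 19 needs, 10 characteristics, and its own weights):

      import numpy as np

      # hypothetical customer needs and importance weights
      needs = ["technical maturity", "wastewater emissions", "internal rate of return"]
      w = np.array([9.0, 8.0, 7.0])

      # hypothetical relationship matrix (rows: needs, cols: characteristics),
      # using the common 9/3/1/0 strength convention
      chars = ["production costs", "investment costs", "waste emissions"]
      R = np.array([[3, 9, 1],
                    [1, 1, 9],
                    [9, 3, 1]])

      raw = w @ R                   # absolute importance of each characteristic
      rel = raw / raw.sum()         # normalized ranking weights
      for name, score in sorted(zip(chars, rel), key=lambda t: -t[1]):
          print(f"{name:18s} {score:.1%}")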

  1. Fabric defect detection based on visual saliency using deep feature and low-rank recovery

    Science.gov (United States)

    Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan

    2018-04-01

    Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, the network is pre-trained on the large MNIST dataset to obtain initial parameters and then fine-tuned in a supervised manner on a fabric image library using convolutional neural networks (CNNs), yielding a more accurate deep model. Second, the fabric images are uniformly divided into blocks of equal size, multi-layer deep features are extracted for each block with the trained network, and all extracted features are concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to decompose the feature matrix into a low-rank matrix, which models the background, and a sparse matrix, which indicates the salient defects. Finally, an iterative optimal-threshold segmentation algorithm is utilized to segment the saliency maps generated from the sparse matrix and locate the fabric defect regions. Experimental results demonstrate that features extracted by the CNN characterize fabric texture better than traditional hand-crafted features such as LBP and HOG, and that the proposed method accurately detects the defect regions of various fabric defects, even in images with complex texture.

  2. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    Science.gov (United States)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinical practice because of its ability to produce improved energy resolution images with narrow energy bins. However, narrow energy bin images are often affected by serious quantum noise because of the limited number of photons in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches collected across multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component: the low-rank component represents the stationary background shared over different energy bins, while the sparse component represents the spectral features particular to individual energy bins. An effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.

  3. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-06-01

    Full Text Available Because of the trade-off between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss incurred during acquisition, the reconstruction of RSI is of great significance in remote sensing applications. Recent studies have demonstrated that reference-image-based reconstruction methods have great potential for higher reconstruction performance, but they still lack accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We exploit the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the contribution of this paper is threefold: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and loss of texture detail; (3) on this basis, we combine conjugate gradient algorithms with singular value thresholding (SVT) to solve the proposed model. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves peak signal-to-noise ratio (PSNR) by several dB and preserves image details significantly better than most current approaches that do not use reference images as priors. In addition, the generalized nonconvex low-rank approximation is naturally robust to noise, so the proposed algorithm can handle low-resolution, noisy inputs in a unified framework.

  4. OCT despeckling via weighted nuclear norm constrained non-local low-rank representation

    Science.gov (United States)

    Tang, Chang; Zheng, Xiao; Cao, Lijuan

    2017-10-01

    As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in the medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method based on non-local low-rank representation with a weighted nuclear norm constraint. Unlike previous OCT despeckling methods of this kind, we first generate a guidance image to improve the quality of the non-local group patch selection; a low-rank optimization model with a weighted nuclear norm constraint is then formulated to process the selected group patches. The corruption probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Because a single patch may belong to several groups, the different estimates of each patch are aggregated to obtain its final despeckled value. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.

  5. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrix are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for cross-sectional correlation even after the common factors are taken out, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
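
    A minimal sketch of the estimator family described here (principal-component factors plus a thresholded idiosyncratic covariance, in the spirit of POET); note that the paper's threshold is entry-adaptive, whereas a single constant is used below for brevity:

      import numpy as np

      def factor_cov(X, n_factors, thresh):
          """Covariance estimate: a low-rank part from the leading principal
          components plus a soft-thresholded (sparse) error covariance.
          X is a T x p matrix of centred observations."""
          T, p = X.shape
          S = X.T @ X / T                            # sample covariance
          vals, vecs = np.linalg.eigh(S)             # ascending eigenvalues
          lead = np.argsort(vals)[::-1][:n_factors]
          lam, B = vals[lead], vecs[:, lead]
          low_rank = (B * lam) @ B.T                 # common-factor component
          resid = S - low_rank                       # idiosyncratic component
          sparse = np.sign(resid) * np.maximum(np.abs(resid) - thresh, 0.0)
          np.fill_diagonal(sparse, np.diag(resid))   # never threshold variances
          return low_rank + sparse

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 50))             # hypothetical return panel
      Sigma = factor_cov(X - X.mean(axis=0), n_factors=3, thresh=0.05)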

  6. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, owing to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture this complex local structure. We therefore present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structured feature subspaces. The segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on the LHOG features, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. By integrating the LSS and LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt a subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images of different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  7. The optimized expansion based low-rank method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2014-03-01

    Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion and artifact free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that approximates this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations than the standard low-rank method within reasonable accuracy, and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of shear wave artifacts, and the algorithm does not require that η > 0. In addition, the rank required by the optimization approach for high accuracy in anisotropic media was lower than that of the decomposition approach, making it more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.

  8. The role of IGCC technology in power generation using low-rank coal

    Energy Technology Data Exchange (ETDEWEB)

    Juangjandee, Pipat

    2010-09-15

    Based on basic test results on the gasification rate of Mae Moh lignite coal, it was found that an IDGCC power plant is the most suitable option for Mae Moh lignite. In conclusion, the future of an IDGCC power plant using low-rank coal at the Mae Moh mine would hinge on the strictness of future air pollution control regulations, including greenhouse gas emissions, and the constraint of Thailand's foreign currency reserves needed to import fuels, in addition to economic considerations. If and when these obstacles must be overcome, IGCC is one viable alternative that power generation planners must consider.

  9. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander; Nowak, Wolfgang

    2014-01-01

    Kriging algorithms based on FFT, on the separability of certain covariance functions, and on low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design, respectively.
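
    The FFT ingredient of the speedup can be shown in one dimension: on a regular grid a stationary covariance matrix is Toeplitz, so embedding it in a circulant matrix lets it be applied in O(L log L). A sketch of that building block only (the paper additionally exploits separability and low rank across dimensions):

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_matvec_fft(first_col, x):
          """Apply a symmetric Toeplitz covariance matrix (a stationary
          covariance on a regular 1-D grid, given by its first column)
          to a vector in O(L log L) via circulant embedding and FFT."""
          n = len(first_col)
          c = np.concatenate([first_col, first_col[-2:0:-1]])  # circulant embedding
          eig = np.fft.fft(c)                  # eigenvalues of the circulant
          xp = np.concatenate([x, np.zeros(len(c) - n)])
          return np.fft.ifft(eig * np.fft.fft(xp)).real[:n]

      # check against the dense product for an exponential covariance
      rng = np.random.default_rng(0)
      col = np.exp(-np.arange(256) / 32.0)
      x = rng.standard_normal(256)
      assert np.allclose(toeplitz_matvec_fft(col, x), toeplitz(col) @ x)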

  10. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. An increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multipliers. In this book, the authors discuss fundame...

  11. Matrix completion via a low rank factorization model and an Augmented Lagrangean Successive Overrelaxation Algorithm

    Directory of Open Access Journals (Sweden)

    Hugo Lara

    2014-12-01

    Full Text Available The matrix completion problem (MC) has been approximated using the nuclear norm relaxation. Some algorithms based on this strategy require the computationally expensive singular value decomposition (SVD) at each iteration. One way to avoid SVD calculations is to use alternating methods, which pursue the completion through matrix factorization with a low-rank condition. In this work an augmented Lagrangean-type alternating algorithm is proposed. The new algorithm uses duality information to define the iterations, in contrast to the solely primal LMaFit algorithm, which employs a Successive Over Relaxation scheme. The convergence of the algorithm is studied, and numerical experiments are given to compare the numerical performance of both proposals.
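
    A primal alternating scheme of the LMaFit flavour can be sketched in a few lines (regularised alternating least squares on the factors); the paper's contribution, the augmented-Lagrangean duality information and SOR acceleration, is not reproduced here:

      import numpy as np

      def als_complete(M, mask, k=5, reg=1e-3, n_iter=50, seed=0):
          """Complete M (entries observed where mask is True) with the
          factorization model X = U @ V.T, alternating ridge least squares."""
          rng = np.random.default_rng(seed)
          m, n = M.shape
          U = rng.standard_normal((m, k))
          V = rng.standard_normal((n, k))
          I = reg * np.eye(k)
          for _ in range(n_iter):
              for i in range(m):               # refit each row of U
                  obs = mask[i]; Vo = V[obs]
                  U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ M[i, obs])
              for j in range(n):               # refit each row of V
                  obs = mask[:, j]; Uo = U[obs]
                  V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[obs, j])
          return U @ V.T

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
      mask = rng.random(A.shape) < 0.5         # observe about half the entries
      X = als_complete(A * mask, mask, k=5)
      print(np.abs(X - A)[~mask].max())        # error on the unobserved entries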

  12. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander; Nowak, Wolfgang

    2014-01-01

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10^8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design, respectively.

  13. Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods

    International Nuclear Information System (INIS)

    Brown, J.; Brune, P.

    2013-01-01

    Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
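
    As a toy illustration of the idea (not the authors' implementation), the sketch below starts from a deliberately stale Jacobian and improves its inverse with rank-one "good Broyden" updates between reassemblies:

      import numpy as np

      def broyden_with_lagged_jacobian(F, x, J_lagged, n_iter=30, tol=1e-10):
          """Newton-type iteration that starts from a lagged (frozen) Jacobian
          and applies rank-one Broyden updates to its inverse, instead of
          reassembling the Jacobian at every step."""
          H = np.linalg.inv(J_lagged)          # inverse of the stale Jacobian
          f = F(x)
          for _ in range(n_iter):
              s = -H @ f                       # quasi-Newton step
              x_new = x + s
              f_new = F(x_new)
              if np.linalg.norm(f_new) < tol:
                  return x_new
              y = f_new - f
              Hy = H @ y                       # "good Broyden" rank-one update:
              H += np.outer(s - Hy, s @ H) / (s @ Hy)
              x, f = x_new, f_new
          return x

      # toy system F(x) = 0 with a deliberately out-of-date Jacobian
      F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
      J_stale = np.array([[2.0, 1.0], [1.0, -2.0]])   # Jacobian at an old iterate
      print(broyden_with_lagged_jacobian(F, np.array([1.0, 1.0]), J_stale))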

  14. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2016-06-01

    Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. The algorithm preserves the computational simplicity of the classical proximal gradient algorithm, in which a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.

  15. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander

    2014-01-08

    Kriging algorithms based on FFT, on the separability of certain covariance functions, and on low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design, respectively.

  16. Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10^8, with problem sizes of 1.5e+13 and 2e+15 estimation points for Kriging and spatial design, respectively.

  17. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  18. Modeling of pseudoacoustic P-waves in orthorhombic media with a low-rank approximation

    KAUST Repository

    Song, Xiaolei

    2013-06-04

    Wavefield extrapolation in pseudoacoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We use the dispersion relation for scalar wave propagation in pseudoacoustic orthorhombic media to model acoustic wavefields. The wavenumber-domain application of the Laplacian operator allows us to propagate the P-waves exclusively, without imposing any conditions on the parameter range for stability. It also allows us to avoid the dispersion artifacts commonly associated with evaluating the Laplacian operator in the space domain using practical finite-difference stencils. To handle the corresponding space-wavenumber mixed-domain operator, we apply the low-rank approximation approach. Considering the number of parameters necessary to describe orthorhombic anisotropy, the low-rank approach yields a space-wavenumber decomposition of the extrapolation operator that depends on the spatial location regardless of the parameters, a feature necessary for orthorhombic anisotropy. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Furthermore, there is no coupling of qSV and qP waves because we use the analytical dispersion solution corresponding to the P-wave.

  19. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    Science.gov (United States)

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan times. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits the strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of the magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that it leads to improved accuracy over the conventional approach. Practically, the proposed method has the potential to allow a 3× speedup with minimal reconstruction error, resulting in an imaging time of less than 5 s per slice.

  20. Promoting effect of various biomass ashes on the steam gasification of low-rank coal

    International Nuclear Information System (INIS)

    Rizkiana, Jenny; Guan, Guoqing; Widayatno, Wahyu Bambang; Hao, Xiaogang; Li, Xiumin; Huang, Wei; Abudula, Abuliti

    2014-01-01

    Highlights: • Biomass ash was utilized to promote the gasification of low rank coal. • The promoting effect of biomass ash depended strongly on the AAEM content of the ash. • The stability of the ash could be improved by maintaining the AAEM amount in the ash. • Different biomass ashes can have completely different catalytic activities. - Abstract: Applying biomass ash as a catalyst to improve the gasification rate is a promising route to the effective utilization of waste ash as well as to cost reduction. The catalytic activity of biomass ash in the gasification of low rank coal was investigated in detail in the present study. Ashes from three kinds of biomass, i.e., brown seaweed (BS), eel grass (EG), and rice straw (RS), were separately mixed with the coal sample and gasified in a fixed bed downdraft reactor using steam as the gasifying agent. BS and EG ashes enhanced the gas production rate more than RS ash did. The higher catalytic activity of BS and EG ash was mainly attributed to their higher content of alkali and alkaline earth metals (AAEM) and lower content of silica. The higher silica content of the RS ash was found to inhibit the steam gasification of the coal. Catalytic activity remained stable when the amount of AAEM in the regenerated ash was maintained at that of the original ash.

  1. A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.

    Science.gov (United States)

    Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong

    2017-10-01

    Many methods exist for the recognition of complete face images. In real applications, however, the images to be recognized are usually incomplete, and such recognition is considerably more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to address this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with the truncated nuclear norm regularization solution, and then extracts low-rank parts of the recovered images as filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet method achieves high face recognition rates for heavily corrupted images, especially on large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of the MRF time-series images and further enforces temporal subspace constraints to capture the magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, enabling efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF and also reduces the data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Relationship between Particle Size Distribution of Low-Rank Pulverized Coal and Power Plant Performance

    Directory of Open Access Journals (Sweden)

    Rajive Ganguli

    2012-01-01

    Full Text Available The impact of the particle size distribution (PSD) of pulverized low-rank, high-volatile-content Alaska coal on combustion-related power plant performance was studied in a series of field-scale tests. Performance was gauged through efficiency (the ratio of megawatts generated to energy consumed as coal), emissions (SO2, NOx, CO), and the carbon content of ash (fly ash and bottom ash). The study revealed that the tested coal could be burned at a grind as coarse as 50% passing 76 microns with no deleterious impact on power generation or emissions. The PSDs tested in this study were in the range of 41 to 81 percent passing 76 microns. There was negligible correlation between PSD and the following factors: efficiency, SO2, NOx, and CO. Additionally, two tests in which stack mercury (Hg) data were collected did not demonstrate any real difference in Hg emissions with PSD. The results from the field tests have positive implications for pulverized coal power plants that burn low-rank high-volatile-content coals (such as Powder River Basin coal): these plants can potentially reduce in-plant load by grinding the coal less (without impacting plant performance on emissions and efficiency) and thereby increase their marketability.

  4. SAR Target Recognition via Local Sparse Representation of Multi-Manifold Regularized Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Meiting Yu

    2018-02-01

    Full Text Available The extraction of a valuable set of features and the design of a discriminative classifier are crucial for target recognition in SAR image. Although various features and classifiers have been proposed over the years, target recognition under extended operating conditions (EOCs is still a challenging problem, e.g., target with configuration variation, different capture orientations, and articulation. To address these problems, this paper presents a new strategy for target recognition. We first propose a low-dimensional representation model via incorporating multi-manifold regularization term into the low-rank matrix factorization framework. Two rules, pairwise similarity and local linearity, are employed for constructing multiple manifold regularization. By alternately optimizing the matrix factorization and manifold selection, the feature representation model can not only acquire the optimal low-rank approximation of original samples, but also capture the intrinsic manifold structure information. Then, to take full advantage of the local structure property of features and further improve the discriminative ability, local sparse representation is proposed for classification. Finally, extensive experiments on moving and stationary target acquisition and recognition (MSTAR database demonstrate the effectiveness of the proposed strategy, including target recognition under EOCs, as well as the capability of small training size.

  5. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group, which consists of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimating the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside each patch and group the patches by a distance metric on the manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
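
    The weighted singular-value thresholding step mentioned here has a simple closed form when the weights are non-descending (larger singular values, which carry structure, are shrunk less). A minimal sketch; the weight schedule below is an illustrative choice, not the paper's:

      import numpy as np

      def weighted_svt(X, weights):
          """Proximal step for the weighted nuclear norm sum_i w_i * sigma_i:
          shrink each singular value by its own weight. With non-descending
          weights w_1 <= w_2 <= ... this soft shrinkage is the closed-form
          minimizer used in weighted nuclear norm minimization."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

      # illustrative reweighting: penalize small singular values more
      rng = np.random.default_rng(0)
      X = rng.standard_normal((64, 48))
      s = np.linalg.svd(X, compute_uv=False)
      w = 1.0 / (s + 1e-6)
      w *= 5.0 / w.max()            # hypothetical scaling of the weights
      Y = weighted_svt(X, w)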

  6. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and the sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out over only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving sequential updating of the polynomial coefficients along separate dimensions; specifically, we examine the selection of the optimal rank, the stopping criteria in the coefficient updates, and error estimation. We then compare canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  7. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    International Nuclear Information System (INIS)

    Konakli, Katerina; Sudret, Bruno

    2016-01-01

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and the sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out over only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving sequential updating of the polynomial coefficients along separate dimensions; specifically, we examine the selection of the optimal rank, the stopping criteria in the coefficient updates, and error estimation. We then compare canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input

  8. On predicting student performance using low-rank matrix factorization techniques

    DEFF Research Database (Denmark)

    Lorenzen, Stephan Sloth; Pham, Dang Ninh; Alstrup, Stephen

    2017-01-01

    Predicting the score of a student is one of the important problems in educational data mining. The scores given by an individual student reflect how a student understands and applies the knowledge conveyed in class. A reliable performance prediction enables teachers to identify weak students...... that require remedial support, generate adaptive hints, and improve the learning of students. This work focuses on predicting the score of students in the quiz system of the Clio Online learning platform, the largest Danish supplier of online learning materials, covering 90% of Danish elementary schools...... and the current version of the data set is very sparse, the very low-rank approximation can capture enough information. This means that the simple baseline approach achieves similar performance compared to other advanced methods. In future work, we will restrict the quiz data set, e.g. only including quizzes...
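
    A standard collaborative-filtering baseline for this kind of score prediction is a biased low-rank factorization trained by stochastic gradient descent. The sketch below, with made-up toy data, shows that generic flavour rather than the paper's exact setup:

      import numpy as np

      def sgd_mf(triples, n_students, n_quizzes, k=3, lr=0.01, reg=0.05,
                 epochs=30, seed=0):
          """Biased low-rank model score = mu + b_s + b_q + p_s . q_q,
          trained by SGD on (student, quiz, score) triples."""
          rng = np.random.default_rng(seed)
          P = 0.1 * rng.standard_normal((n_students, k))
          Q = 0.1 * rng.standard_normal((n_quizzes, k))
          bs = np.zeros(n_students); bq = np.zeros(n_quizzes)
          mu = np.mean([r for _, _, r in triples])
          for _ in range(epochs):
              for s, q, r in triples:
                  e = r - (mu + bs[s] + bq[q] + P[s] @ Q[q])
                  bs[s] += lr * (e - reg * bs[s])
                  bq[q] += lr * (e - reg * bq[q])
                  P[s], Q[q] = (P[s] + lr * (e * Q[q] - reg * P[s]),
                                Q[q] + lr * (e * P[s] - reg * Q[q]))
          return lambda s, q: mu + bs[s] + bq[q] + P[s] @ Q[q]

      # hypothetical data: (student_id, quiz_id, score in [0, 1])
      data = [(0, 0, 0.9), (0, 1, 0.8), (1, 0, 0.4), (2, 1, 0.6), (2, 2, 0.7)]
      predict = sgd_mf(data, n_students=3, n_quizzes=3)
      print(round(predict(1, 1), 2))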

  9. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio

    2018-01-03

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is twofold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey, to achieve the former objective. We categorize the recent advances in this field from the perspective of the compute-memory tradeoff, which has not previously been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance of the different methods.

  10. Low rank approximation method for efficient Green's function calculation of dissipative quantum transport

    Science.gov (United States)

    Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann

    2013-06-01

    In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks against exact NEGF solutions show (1) very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speedup factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations for a 1000 nm long resistor on standard hardware nicely illustrates the capability of this new method.

  11. On low rank classical groups in string theory, gauge theory and matrix models

    International Nuclear Information System (INIS)

    Intriligator, Ken; Kraus, Per; Ryzhov, Anton V.; Shigemori, Masaki; Vafa, Cumrun

    2004-01-01

    We consider N=1 supersymmetric U(N), SO(N), and Sp(N) gauge theories, with two-index tensor matter and added tree-level superpotential, for general breaking patterns of the gauge group. By considering the string theory realization and geometric transitions, we clarify when glueball superfields should be included and extremized, or rather set to zero; this issue arises for unbroken group factors of low rank. The string theory results, which are equivalent to those of the matrix model, refer to a particular UV completion of the gauge theory, which could differ from conventional gauge theory results by residual instanton effects. Often, however, these effects exhibit miraculous cancellations, and the string theory or matrix model results end up agreeing with standard gauge theory. In particular, these string theory considerations explain and remove some apparent discrepancies between gauge theories and matrix models in the literature

  12. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio; Ibeid, Huda; Keyes, David E.

    2018-01-01

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is twofold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey, to achieve the former objective. We categorize the recent advances in this field from the perspective of the compute-memory tradeoff, which has not previously been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance of the different methods.

  13. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
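
    The reason such systems are cheap to solve is essentially the Sherman-Morrison-Woodbury identity: a solve with D + UVᵀ reduces to block-diagonal solves (independent across blocks, hence parallel) plus one small dense solve. A sketch for a globally low-rank Q = UVᵀ; the paper treats the more general case of blockwise low-rank Q:

      import numpy as np
      from scipy.linalg import block_diag

      def woodbury_solve(solve_D, U, V, y):
          """Solve (D + U V^T) x = y via Sherman-Morrison-Woodbury.
          solve_D applies D^{-1}; since D is block diagonal, each block
          can be handled by an independent processor."""
          Dy = solve_D(y)
          DU = solve_D(U)
          k = U.shape[1]
          small = np.eye(k) + V.T @ DU          # k x k capacitance matrix
          return Dy - DU @ np.linalg.solve(small, V.T @ Dy)

      # toy check: two diagonal blocks plus a rank-2 correction
      rng = np.random.default_rng(1)
      D1 = rng.standard_normal((4, 4)) + 4 * np.eye(4)
      D2 = rng.standard_normal((4, 4)) + 4 * np.eye(4)
      D = block_diag(D1, D2)
      U, V = rng.standard_normal((8, 2)), rng.standard_normal((8, 2))
      y = rng.standard_normal(8)
      solve_D = lambda rhs: np.linalg.solve(D, rhs)
      x = woodbury_solve(solve_D, U, V, y)
      assert np.allclose((D + U @ V.T) @ x, y)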

  14. Extracellular oxidases and the transformation of solubilised low-rank coal by wood-rot fungi

    Energy Technology Data Exchange (ETDEWEB)

    Ralph, J.P. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Graham, L.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Catcheside, D.E.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences

    1996-12-31

    The involvement of extracellular oxidases in the biotransformation of low-rank coal was assessed by correlating the ability of nine white-rot and brown-rot fungi to alter macromolecular material in alkali-solubilised brown coal with the spectrum of oxidases they produce when grown on low-nitrogen medium. The coal fraction used was that soluble at 3.0 ≤ pH ≤ 6.0 (SWC6 coal). In 15-ml cultures, Gloeophyllum trabeum, Lentinus lepideus and Trametes versicolor produced little or no lignin peroxidase, manganese (Mn) peroxidase or laccase activity and caused no change to SWC6 coal. Ganoderma applanatum and Pycnoporus cinnabarinus also produced no detectable lignin or Mn peroxidases or laccase, yet increased the absorbance at 400 nm of SWC6 coal. G. applanatum, which produced veratryl alcohol oxidase, also increased the modal apparent molecular mass. SWC6 coal exposed to Merulius tremellosus and Perenniporia tephropora, which secreted Mn peroxidases and laccase, and Phanerochaete chrysosporium, which produced Mn and lignin peroxidases, was polymerised but had unchanged or decreased absorbance. In the case of both P. chrysosporium and M. tremellosus, polymerisation of SWC6 coal was most extensive, leading to the formation of a complex insoluble in 100 mM NaOH. Rigidoporus ulmarius, which produced only laccase, both polymerised and reduced the A₄₀₀ of SWC6 coal. P. chrysosporium, M. tremellosus and P. tephropora grown in 10-ml cultures produced a spectrum of oxidases similar to that in 15-ml cultures but, in each case, caused more extensive loss of A₄₀₀, and P. chrysosporium depolymerised SWC6 coal. It is concluded that the extracellular oxidases of white-rot fungi can transform low-rank coal macromolecules and that the increased oxygen availability in the shallower 10-ml cultures favours catabolism over polymerisation. (orig.)

  15. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    Science.gov (United States)

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank; that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, of dimension lower than or equal to the number of multispectral bands. We therefore propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms, and we explore two alternatives for defining the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

  16. Effects of microwave irradiation treatment on physicochemical characteristics of Chinese low-rank coals

    International Nuclear Information System (INIS)

    Ge, Lichao; Zhang, Yanwei; Wang, Zhihua; Zhou, Junhu; Cen, Kefa

    2013-01-01

    Highlights:
    • Typical Chinese lignites of various ranks are upgraded through microwave irradiation.
    • The pore distribution extends into the micropore region; BET area and pore volume increase.
    • FTIR shows changes in microstructure and an improvement in coal rank after upgrading.
    • Upgraded coals exhibit weak combustion similar to Da Tong bituminous coal.
    • Effects are more evident for raw brown coal of relatively lower rank.
    Abstract: This study investigates the effects of microwave irradiation treatment on the coal composition, pore structure, coal rank, functional groups, and combustion characteristics of typical Chinese low-rank coals. Results showed that the upgrading process (microwave irradiation treatment) significantly reduced the coals’ inherent moisture and increased their calorific value and fixed carbon content. It was also found that the upgrading process generated micropores and increased the pore volume and surface area of the coals. Results on the oxygen/carbon ratio indicated that the low-rank coals were upgraded to higher rank after the upgrading process, in agreement with the findings from Fourier transform infrared spectroscopy. Unstable components in the coal were converted into stable components during the upgrading process. Thermogravimetric analysis showed that the combustion processes of upgraded coals were delayed toward the high-temperature region, the ignition and burnout temperatures increased, and the comprehensive combustion parameter decreased. Compared with raw brown coals, the upgraded coals exhibited weak combustion characteristics similar to bituminous coal. The changes in physicochemical characteristics became more notable when the processing temperature increased from 130 °C to 160 °C or when the rank of the raw brown coal was lower. Microwave irradiation treatment can therefore be considered an effective dewatering and upgrading process.

  17. Influence of the hydrothermal dewatering on the combustion characteristics of Chinese low-rank coals

    International Nuclear Information System (INIS)

    Ge, Lichao; Zhang, Yanwei; Xu, Chang; Wang, Zhihua; Zhou, Junhu; Cen, Kefa

    2015-01-01

    This study investigates the influence of hydrothermal dewatering performed at different temperatures on the combustion characteristics of Chinese low-rank coals of different coalification maturities. It was found that the upgrading process significantly decreased the inherent moisture and oxygen content, increased the calorific value and fixed carbon content, and promoted the destruction of the hydrophilic oxygen functional groups. The oxygen/carbon atomic ratio indicated that the upgrading process brought the low-rank coals close to high-rank coals, a finding also supported by Fourier transform infrared spectroscopy. Thermogravimetric analysis showed that the combustion processes of upgraded coals were delayed toward the high-temperature region, and the upgraded coals had higher ignition and burnout temperatures. On the other hand, based on the higher average combustion rate and comprehensive combustion parameter, the upgraded coals performed better than the raw brown coals and the Da Tong bituminous coal. In the ignition segment, the activation energy increased after treatment, but it decreased in the combustion stage. The changes in coal composition, microstructure, rank, and combustion characteristics were more notable as the hydrothermal dewatering temperature increased from 250 to 300 °C or when coals of lower rank were used.
    Highlights:
    • Typical Chinese lignites of various ranks are upgraded by hydrothermal dewatering.
    • Upgraded coals exhibit chemical compositions comparable to that of bituminous coal.
    • FTIR shows changes in microstructure and an improvement in coal rank after upgrading.
    • Upgraded coals are harder to ignite but combust readily.
    • Effects are more evident for raw brown coal of relatively lower rank.

  18. Low rank factorization of the Coulomb integrals for periodic coupled cluster theory.

    Science.gov (United States)

    Hummel, Felix; Tsatsoulis, Theodoros; Grüneis, Andreas

    2017-03-28

    We study a tensor hypercontraction decomposition of the Coulomb integrals of periodic systems, where the integrals are factorized into a contraction of six matrices of which only two are distinct. We find that the Coulomb integrals can be well approximated in this form already with small matrices compared to the number of real-space grid points. The cost of computing the matrices scales as O(N⁴) using a regularized form of the alternating least squares algorithm. The studied factorization of the Coulomb integrals can be exploited to reduce the scaling of the computational cost of expensive tensor contractions appearing in the amplitude equations of coupled cluster methods with respect to system size. We apply the developed methodologies to calculate the adsorption energy of a single water molecule on a hexagonal boron nitride monolayer in a plane wave basis set and periodic boundary conditions.
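
    The regularized alternating least squares step mentioned above is easy to illustrate on an ordinary matrix. The sketch below factorizes a low-rank matrix as M ≈ A Bᵀ with ridge-regularized ALS; it is a minimal stand-in for the paper's tensor hypercontraction (which factorizes a four-index integral tensor), and the rank, regularization strength, and toy matrix are assumptions of the example.

      import numpy as np

      def regularized_als(M, k, lam=1e-3, iters=50):
          """Rank-k factorization M ~ A @ B.T by ridge-regularized alternating LS."""
          m, n = M.shape
          rng = np.random.default_rng(0)
          A, B = rng.standard_normal((m, k)), rng.standard_normal((n, k))
          I = lam * np.eye(k)
          for _ in range(iters):
              A = M @ B @ np.linalg.inv(B.T @ B + I)    # ridge solve for A
              B = M.T @ A @ np.linalg.inv(A.T @ A + I)  # ridge solve for B
          return A, B

      rng = np.random.default_rng(1)
      M = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 150))  # true rank 8
      A, B = regularized_als(M, k=8)
      print("relative error:", np.linalg.norm(M - A @ B.T) / np.linalg.norm(M))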

  19. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    The primary challenges are twofold: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, the approach is to fit a vector autoregressive (VAR) model of sufficiently high order, so that complex lead-lag temporal dynamics between the channels can be accurately characterized.

  20. Preliminary Design of a Substitute Natural Gas (SNG) Plant from Low Rank Coal (Pra Desain Pabrik Substitute Natural Gas dari Low Rank Coal)

    Directory of Open Access Journals (Sweden)

    Asti Permatasari

    2014-09-01

    Indonesia's reserves of low- and medium-rank coal are very large, amounting to 2,426.00 million tons and 186.00 million tons, respectively. For this reason, this SNG-from-low-rank-coal plant will be established in the Ilir Timur subdistrict of South Sumatra. The plant is planned for construction in 2016 and is expected to be ready for operation in 2018. Natural gas consumption in 2018 is projected at 906,599.3 MMSCF, so the new plant is expected to cover 4% of Indonesia's natural gas demand, namely 36,295.502 MMSCF per year, or 109.986 MMSCFD. The production of SNG from low rank coal consists of four main processes: coal preparation, gasification, gas cleaning, and methanation. The economic analysis gives an investment of USD 823,947,924, an IRR of 13.12%, a payout time (POT) of 5 years, and a break-even point (BEP) of 68.55%.

  1. A Novel Fixed Low-Rank Constrained EEG Spatial Filter Estimation with Application to Movie-Induced Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Ken Yano

    2016-01-01

    This paper proposes a novel fixed low-rank spatial filter estimation for brain computer interface (BCI) systems, with an application to recognizing emotions elicited by movies. The proposed approach unifies tasks such as feature extraction, feature selection, and classification, which are often tackled independently in a “bottom-up” manner, under a single regularized loss minimization problem. The loss function is explicitly derived from the conventional BCI approach, and its minimization is solved by optimization with a nonconvex fixed low-rank constraint. For evaluation, an experiment was conducted in which movies induced emotions in dozens of young adult subjects, and their emotional states were estimated using the proposed method. The advantage of the proposed method is that it combines feature selection, feature extraction, and classification into a monolithic optimization problem with a fixed low-rank regularization, which implicitly estimates optimal spatial filters. The proposed method shows competitive performance against the best CSP-based alternatives.

  2. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    Science.gov (United States)

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric makes an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.
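
    For orientation, the sketch below solves the same fixed low-rank matrix completion problem with the plain Euclidean baseline such methods improve on: alternating ridge solves over a rank-k factorization X ≈ U Vᵀ restricted to observed entries. It is not the paper's Riemannian conjugate gradient algorithm, and the rank, regularization, and sampling rate are illustrative.

      import numpy as np

      def complete(M_obs, mask, k, lam=1e-2, iters=100):
          """Alternating ridge regression for matrix completion with fixed rank k."""
          m, n = M_obs.shape
          rng = np.random.default_rng(0)
          U, V = rng.standard_normal((m, k)), rng.standard_normal((n, k))
          for _ in range(iters):
              for i in range(m):            # update row factors from observed columns
                  Vi = V[mask[i]]
                  U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k),
                                         Vi.T @ M_obs[i, mask[i]])
              for j in range(n):            # update column factors likewise
                  Uj = U[mask[:, j]]
                  V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(k),
                                         Uj.T @ M_obs[mask[:, j], j])
          return U @ V.T

      rng = np.random.default_rng(1)
      M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))  # rank-5 truth
      mask = rng.random(M.shape) < 0.4                                  # 40% observed
      X = complete(M * mask, mask, k=5)
      print("error on unobserved entries:",
            np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask]))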

  3. Accelerated cardiac cine MRI using locally low rank and finite difference constraints.

    Science.gov (United States)

    Miao, Xin; Lingala, Sajan Goud; Guo, Yi; Jao, Terrence; Usman, Muhammad; Prieto, Claudia; Nayak, Krishna S

    2016-07-01

    To evaluate the potential value of combining multiple constraints for highly accelerated cardiac cine MRI. A locally low rank (LLR) constraint and a temporal finite difference (FD) constraint were combined to reconstruct cardiac cine data from highly undersampled measurements. Retrospectively undersampled 2D Cartesian reconstructions were quantitatively evaluated against fully sampled data using normalized root mean square error, structural similarity index (SSIM) and high frequency error norm (HFEN). This method was also applied to 2D golden-angle radial real-time imaging to facilitate single breath-hold whole-heart cine (12 short-axis slices, 9–13 s single breath hold). Reconstruction was compared against state-of-the-art constrained reconstruction methods: LLR, FD, and k-t SLR. At 10 to 60 spokes/frame, LLR+FD better preserved fine structures and depicted myocardial motion with reduced spatio-temporal blurring in comparison to existing methods. LLR yielded higher SSIM ranking than FD; FD had higher HFEN ranking than LLR. LLR+FD combined the complementary advantages of the two, and ranked the highest in all metrics for all retrospectively undersampled cases. Single breath-hold multi-slice cardiac cine with prospective undersampling was enabled with in-plane spatio-temporal resolutions of 2×2 mm² and 40 ms. Highly accelerated cardiac cine is enabled by the combination of 2D undersampling and the synergistic use of LLR and FD constraints. Copyright © 2016 Elsevier Inc. All rights reserved.
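
    The two constraints act naturally as proximal (shrinkage) operators on an image time series. The sketch below gives plausible forms of both: patch-wise singular-value thresholding for the LLR term, and a cheap approximate shrinkage of the temporal finite differences for the FD term (not the exact TV proximal map). Patch size, thresholds, and the toy array are assumptions; a real reconstruction would interleave such steps with a k-space data-consistency update.

      import numpy as np

      def prox_llr(X, lam, p=8):
          """Soft threshold singular values of each p-by-p patch's Casorati matrix."""
          Y = X.copy()
          ny, nx, nt = X.shape
          for i in range(0, ny, p):
              for j in range(0, nx, p):
                  block = X[i:i+p, j:j+p].reshape(-1, nt)
                  U, s, Vt = np.linalg.svd(block, full_matrices=False)
                  s = np.maximum(s - lam, 0.0)
                  Y[i:i+p, j:j+p] = ((U * s) @ Vt).reshape(X[i:i+p, j:j+p].shape)
          return Y

      def prox_fd(X, lam):
          """Approximate FD shrinkage: soft threshold temporal diffs, re-integrate."""
          D = np.diff(X, axis=-1)
          D = np.sign(D) * np.maximum(np.abs(D) - lam, 0.0)
          return np.concatenate([X[..., :1], X[..., :1] + np.cumsum(D, axis=-1)],
                                axis=-1)

      rng = np.random.default_rng(0)
      X = rng.standard_normal((16, 16, 12))           # toy image series (ny, nx, nt)
      X_reg = prox_fd(prox_llr(X, lam=0.5), lam=0.1)  # one combined shrinkage pass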

  4. Low-rank extremal positive-partial-transpose states and unextendible product bases

    International Nuclear Information System (INIS)

    Leinaas, Jon Magne; Sollid, Per Oyvind; Myrheim, Jan

    2010-01-01

    It is known how to construct, in a bipartite quantum system, a unique low-rank entangled mixed state with positive partial transpose (a PPT state) from an unextendible product basis (UPB), defined as an unextendible set of orthogonal product vectors. We point out that a state constructed in this way belongs to a continuous family of entangled PPT states of the same rank, all related by nonsingular unitary or nonunitary product transformations. The characteristic property of a state ρ in such a family is that its kernel Ker ρ has a generalized UPB, a basis of product vectors, not necessarily orthogonal, with no product vector in Im ρ, the orthogonal complement of Ker ρ. The generalized UPB in Ker ρ has the special property that it can be transformed to orthogonal form by a product transformation. In the case of a system of dimension 3×3, we give a complete parametrization of orthogonal UPBs. This is then a parametrization of families of rank-4 entangled (and extremal) PPT states, and we present strong numerical evidence that it is a complete classification of such states. We speculate that the lowest-rank entangled and extremal PPT states also in higher dimensions are related to generalized, nonorthogonal UPBs in similar ways.
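
    The construction described above can be checked numerically. The sketch below uses the well-known "Tiles" UPB in 3×3 (Bennett et al.), an ingredient assumed for this example rather than taken from the paper, to build the normalized projector onto the complement of the UPB and verify that it is a rank-4 state with positive partial transpose.

      import numpy as np

      e = np.eye(3)
      s2 = np.sqrt(2.0)
      upb = [
          np.kron(e[0], (e[0] - e[1]) / s2),
          np.kron(e[2], (e[1] - e[2]) / s2),
          np.kron((e[0] - e[1]) / s2, e[2]),
          np.kron((e[1] - e[2]) / s2, e[0]),
          np.kron(e[0] + e[1] + e[2], e[0] + e[1] + e[2]) / 3.0,
      ]
      P = sum(np.outer(v, v) for v in upb)      # projector onto the UPB span
      rho = (np.eye(9) - P) / 4.0               # rank-4 PPT entangled state

      # Partial transpose on the second subsystem via an index reshuffle.
      rho_pt = rho.reshape(3, 3, 3, 3).transpose(0, 3, 2, 1).reshape(9, 9)

      print("rank(rho):", np.linalg.matrix_rank(rho))                          # 4
      print("min eig of partial transpose:", np.linalg.eigvalsh(rho_pt).min()) # >= 0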

  5. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    Science.gov (United States)

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
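
    The low-rank step rests on the observation that fingerprint signal evolutions lie near a low-dimensional temporal subspace, so only a few "singular images" need to be reconstructed. The sketch below compresses a toy inversion-recovery dictionary in this way; the signal model, time grid, and energy cutoff are illustrative assumptions, not the paper's simulation setup.

      import numpy as np

      t = np.linspace(0.01, 3.0, 500)                        # time points (s), toy grid
      T1s = np.linspace(0.2, 2.0, 200)                       # candidate T1 values (s)
      D = np.stack([1 - 2 * np.exp(-t / T1) for T1 in T1s])  # dictionary, atoms in rows

      U, s, Vt = np.linalg.svd(D, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 1 - 1e-6) + 1)         # rank keeping ~all energy
      print("temporal rank:", k, "of", D.shape[1])           # a handful out of 500

      Phi = Vt[:k]                                           # temporal subspace basis
      D_c = D @ Phi.T                                        # compressed dictionary
      print("compression error:", np.linalg.norm(D - D_c @ Phi) / np.linalg.norm(D))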

  6. Obtaining Low Rank Coal Biotransforming Bacteria from Microhabitats Enriched with Carbonaceous Residues

    International Nuclear Information System (INIS)

    Valero Valero, Nelson; Rodriguez Salazar, Luz Nidia; Mancilla Gomez, Sandra; Contreras Bayona, Leydis

    2012-01-01

    Bacteria capable of biotransforming low-rank coal (LRC) were isolated from environmental samples altered with coal at the Cerrejon mine. A protocol was designed to select the strains most capable of biotransforming LRC; the protocol includes isolation in a selective medium with LRC powder and qualitative and quantitative tests for LRC solubilization in solid and liquid culture media. Of 75 bacterial strains isolated, 32 showed growth on minimal salts agar with 5% coal. The strains that produce higher values of humic substances (HS) have a solubilization mechanism associated with pH changes in the culture medium, probably related to the production of extracellular alkaline substances by the bacteria. The largest number of strains, and the bacteria with the strongest solubilizing activity on LRC, were isolated from sludge with a high content of coal residue and from the rhizosphere of Typha domingensis and Cenchrus ciliaris grown on sediments mixed with coal particles. This result suggests that the recovery of LRC-solubilizing bacteria and their solubilization capacity may be related to the microhabitat in which the populations originated.

  7. Low rank approach to computing first and higher order derivatives using automatic differentiation

    International Nuclear Information System (INIS)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-01-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large-scale computational models. Using the principles of the Efficient Subspace Method (ESM), low-rank approximations of the first- and higher-order derivatives can be calculated with greatly reduced computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank than the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. Following ESM, the effective rank can be determined by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined, and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium-cooled reactors. (authors)
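
    The rank-determination step has a simple matrix-free flavor: probe the model's derivative along random input directions and read the effective rank off the singular values of the collected responses. The sketch below imitates this with finite differences standing in for AD on a toy model of known low rank; the model, probe count, step, and tolerance are assumptions of the example.

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_out, r_true = 500, 300, 7
      A = rng.standard_normal((n_out, r_true)) @ rng.standard_normal((r_true, n_in))
      f = lambda x: np.tanh(A @ x)          # model whose Jacobian has rank <= r_true

      x0, h = rng.standard_normal(n_in), 1e-5
      probes = [(f(x0 + h * d) - f(x0)) / h                 # ~ J @ d, finite difference
                for d in rng.standard_normal((20, n_in))]   # a few random probes
      S = np.linalg.svd(np.stack(probes, axis=1), compute_uv=False)
      print("effective rank:", int(np.sum(S > 1e-4 * S[0])))  # recovers r_true = 7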

  8. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    KAUST Repository

    Zhang, Zhendong

    2017-12-17

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyze the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artifacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration (RTM) applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modeling engine performs better than an isotropic migration.

  9. Exponential Family Functional data analysis via a low-rank model.

    Science.gov (United States)

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  10. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligence ...

  11. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However, ... We develop an oracle inequality for the conservative Lasso assuming only the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality, which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014), we allow ...

  12. Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal

    Energy Technology Data Exchange (ETDEWEB)

    Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri, John; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Liber, Pawel; Lopez-Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh

    2012-03-30

    The purpose of this project was to evaluate the ability of advanced low rank coal gasification technology to significantly reduce the cost of electricity (COE) for IGCC power plants with 90% carbon capture and sequestration, compared with the COE for similarly configured IGCC plants using conventional low rank coal gasification technology. GE’s advanced low rank coal gasification technology uses the Posimetric Feed System, a new dry coal feed system based on GE’s proprietary Posimetric Feeder. In order to demonstrate the performance and economic benefits of the Posimetric Feeder in lowering the cost of low rank coal-fired IGCC power with carbon capture, two case studies were completed. In the Base Case, the gasifier was fed a dilute slurry of Montana Rosebud PRB coal using GE’s conventional slurry feed system. In the Advanced Technology Case, the slurry feed system was replaced with the Posimetric Feed System. The process configurations of the two cases were kept the same, to the extent possible, in order to highlight the benefit of substituting the Posimetric Feed System for the slurry feed system.

  13. Co-pyrolysis of low rank coals and biomass: Product distributions

    Energy Technology Data Exchange (ETDEWEB)

    Soncini, Ryan M.; Means, Nicholas C.; Weiland, Nathan T.

    2013-10-01

    Pyrolysis and gasification of combined low rank coal and biomass feeds are the subject of much study in an effort to mitigate the production of greenhouse gases from integrated gasification combined cycle (IGCC) systems. While co-feeding has the potential to reduce the net carbon footprint of commercial gasification operations, the effects of co-feeding on kinetics and product distributions require study to ensure the success of this strategy. Southern yellow pine was pyrolyzed in a semi-batch type drop tube reactor with either Powder River Basin sub-bituminous coal or Mississippi lignite at several temperatures and feed ratios. Product gas composition of expected primary constituents (CO, CO₂, CH₄, H₂, H₂O, and C₂H₄) was determined by in-situ mass spectrometry, while minor gaseous constituents were determined using a GC-MS. Product distributions are fit to linear functions of temperature and quadratic functions of biomass fraction, for use in computational co-pyrolysis simulations. The results are shown to exhibit significant nonlinearities, particularly at higher temperatures and for lower-ranked coals. The co-pyrolysis product distributions evolve more tar, and less char, CH₄, and C₂H₄, than an additive pyrolysis process would suggest. For lignite co-pyrolysis, CO and H₂ production are also reduced. The data suggest that the evolution of hydrogen from rapid pyrolysis of biomass prevents the crosslinking of fragmented aromatic structures during coal pyrolysis, producing tar rather than secondary char and light gases. Finally, it is shown that, for the two coal types tested, co-pyrolysis synergies are more significant as coal rank decreases, likely because the initial structure of these coals contains larger pores and smaller clusters of aromatic structures, which are more readily retained as tar in rapid co-pyrolysis.
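
    The stated fitting form (yields linear in temperature, quadratic in biomass fraction) is an ordinary least-squares problem. The sketch below fits such a surface to made-up numbers; the values are placeholders, not measurements from the study.

      import numpy as np

      T = np.array([700., 800., 900., 700., 800., 900.])  # temperature, illustrative
      x = np.array([0.25, 0.25, 0.25, 0.75, 0.75, 0.75])  # biomass fraction
      y = np.array([4.1, 5.0, 6.2, 7.9, 9.6, 12.1])       # a product yield, made up

      # Design matrix for y = a0 + a1*T + a2*x + a3*x**2
      X = np.column_stack([np.ones_like(T), T, x, x**2])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted coefficients:", coef)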

  14. OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS

    Energy Technology Data Exchange (ETDEWEB)

    Constance Senior; Temi Linjewile

    2003-07-25

    This is the first Quarterly Technical Report for DOE Cooperative Agreement No. DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels, using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Ceramics GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, analysis of the coal, ash and mercury speciation data from the first test series was completed. Good agreement was shown between different methods of measuring mercury in the flue gas: Ontario Hydro, semi-continuous emission monitor (SCEM) and coal composition. There was a loss of total mercury across the commercial catalysts, but not across the blank monolith. The blank monolith showed no oxidation. The data from the first test series show the same trend in mercury oxidation as a function of space velocity that has been seen elsewhere. At space velocities in the range of 6,000–7,000 hr⁻¹ the blank monolith did not show any mercury oxidation, with or without ammonia present. Two of the commercial catalysts clearly showed an effect of ammonia. Two other commercial catalysts showed an effect of ammonia, although the error bars for the no-ammonia case are large. A test plan was written for the second test series and is being reviewed.

  15. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background and dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic ⁸²Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive ⁸²Rb MP PET simulations. In particular, the myocardium defect in the MP PET images showed improved visual quality as well as contrast versus noise tradeoff. The proposed algorithm was also applied to an 8-min clinical cardiac ⁸²Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
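
    A minimal L + S decomposition with iterative soft thresholding takes only a few lines: alternate singular-value thresholding for the low-rank (background) component with entrywise soft thresholding for the sparse (dynamic) component. The penalty weights and the toy frame matrix are illustrative assumptions; the paper's model additionally incorporates the PET measurement model inside the convex problem.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def svt(X, t):
          """Singular-value soft thresholding."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return (U * np.maximum(s - t, 0.0)) @ Vt

      def l_plus_s(M, lam_l=1.0, lam_s=0.05, iters=100):
          L, S = np.zeros_like(M), np.zeros_like(M)
          for _ in range(iters):
              L = svt(M - S, lam_l)    # quasi-static background component
              S = soft(M - L, lam_s)   # sparse dynamic component
          return L, S

      rng = np.random.default_rng(0)
      frames = np.outer(rng.random(64), np.ones(30))                  # static background
      frames += (rng.random((64, 30)) > 0.95) * rng.random((64, 30))  # sparse dynamics
      L, S = l_plus_s(frames)
      print("rank(L):", np.linalg.matrix_rank(L, tol=1e-6),
            "nnz(S):", int((np.abs(S) > 1e-8).sum()))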

  16. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    Science.gov (United States)

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the ...

  17. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    Directory of Open Access Journals (Sweden)

    Xin Tang

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our ...

  18. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    Science.gov (United States)

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the ...

  19. Low-rank coal research. Final technical report, April 1, 1988–June 30, 1989, including quarterly report, April–June 1989

    Energy Technology Data Exchange (ETDEWEB)

    1989-12-31

    This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SOₓ/NOₓ control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).

  20. Formal matrices

    CERN Document Server

    Krylov, Piotr

    2017-01-01

    This monograph is a comprehensive account of formal matrices, examining homological properties of modules over formal matrix rings and summarising the interplay between Morita contexts and K-theory. While various special types of formal matrix rings have been studied for a long time from several points of view and appear in various textbooks, for instance to examine equivalences of module categories and to illustrate rings with one-sided non-symmetric properties, this particular class of rings has, so far, not been treated systematically. Exploring formal matrix rings of order 2 and introducing the notion of the determinant of a formal matrix over a commutative ring, this monograph further covers the Grothendieck and Whitehead groups of rings. Graduate students and researchers interested in ring theory, module theory and operator algebras will find this book particularly valuable. Containing numerous examples, Formal Matrices is a largely self-contained and accessible introduction to the topic, assuming a solid ...

  1. A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements

    KAUST Repository

    Chávez, Gustavo

    2017-03-17

    A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between cyclic reduction and hierarchical matrix arithmetic operations result in a solver with O(N log² N) arithmetic complexity and O(N log N) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the ℋ-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on hierarchical matrices such as ℋ-LU and that it can tackle problems where algebraic multigrid fails to converge.
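
    For context, the sketch below implements the dense baseline such a solver accelerates: a block tridiagonal LU (block Thomas) solve, whose forward sweep forms exactly the Schur complements that the paper compresses with hierarchical low-rank arithmetic and reorders via cyclic reduction. The block size and random test system are illustrative.

      import numpy as np

      def block_thomas(A_diag, A_low, A_up, rhs):
          """Solve a block tridiagonal system by forward Schur elimination."""
          n = len(A_diag)
          D, y = [A_diag[0]], [rhs[0]]
          for i in range(1, n):
              W = A_low[i-1] @ np.linalg.inv(D[i-1])
              D.append(A_diag[i] - W @ A_up[i-1])   # Schur complement of block i
              y.append(rhs[i] - W @ y[i-1])
          x = [None] * n
          x[-1] = np.linalg.solve(D[-1], y[-1])
          for i in range(n - 2, -1, -1):            # back substitution
              x[i] = np.linalg.solve(D[i], y[i] - A_up[i] @ x[i+1])
          return np.concatenate(x)

      rng = np.random.default_rng(0)
      b, n = 4, 6
      A_diag = [10 * np.eye(b) + rng.random((b, b)) for _ in range(n)]
      A_low = [rng.random((b, b)) for _ in range(n - 1)]
      A_up = [rng.random((b, b)) for _ in range(n - 1)]
      rhs = [rng.random(b) for _ in range(n)]

      x = block_thomas(A_diag, A_low, A_up, rhs)
      A = np.zeros((n * b, n * b))                  # assemble dense system to verify
      for i in range(n):
          A[i*b:(i+1)*b, i*b:(i+1)*b] = A_diag[i]
          if i < n - 1:
              A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = A_low[i]
              A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = A_up[i]
      print("residual:", np.linalg.norm(A @ x - np.concatenate(rhs)))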

  2. Simulating propagation of decoupled elastic waves using low-rank approximate mixed-domain integral operators for anisotropic media

    KAUST Repository

    Cheng, Jiubing; Alkhalifah, Tariq Ali; Wu, Zedong; Zou, Peng; Wang, Chenlong

    2016-01-01

    In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagations of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.

  3. Simulating propagation of decoupled elastic waves using low-rank approximate mixed-domain integral operators for anisotropic media

    KAUST Repository

    Cheng, Jiubing

    2016-03-15

    In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagations of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.
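
    The common thread in such methods is that the mixed-domain space-wavenumber operator is numerically low rank, so it can be applied as a short sum of separated terms. The sketch below demonstrates this on a 1-D single-step phase-shift operator, using a truncated SVD in place of the randomized row/column sampling used in practice; the velocity model, grid, and time step are assumptions of the example.

      import numpy as np

      nx, dt = 128, 0.004
      x = np.linspace(0.0, 1.0, nx)
      k = 2 * np.pi * np.fft.fftfreq(nx, d=1.0 / nx)     # wavenumber axis
      v = 1.5 + x                                        # smooth velocity (km/s)
      W = np.exp(1j * np.outer(v, k) * dt)               # mixed-domain operator W(x, k)

      U, s, Vt = np.linalg.svd(W)
      r = int(np.sum(s > 1e-6 * s[0]))                   # numerical rank is small
      W_r = (U[:, :r] * s[:r]) @ Vt[:r]
      print("rank:", r, "of", nx,
            " relative error:", np.linalg.norm(W - W_r) / np.linalg.norm(W))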

  4. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios ...
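
    A bare-bones prototype of the factorization idea can be written as an alternating scheme: a ridge solve for the dense latent source matrix (the squared Frobenius term) and a proximal gradient step with row-wise shrinkage for the coding matrix (the ℓ21 term). This is an illustrative stand-in, not the authors' algorithm; the dimensions and penalties below are assumptions.

      import numpy as np

      def row_shrink(C, t):
          """Proximal operator of t * ||C||_{2,1}: shrink each row's norm."""
          norms = np.linalg.norm(C, axis=1, keepdims=True)
          return C * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

      def fit(M, k, lam_c=0.5, lam_s=1e-2, iters=200):
          rng = np.random.default_rng(0)
          C = 0.1 * rng.standard_normal((M.shape[0], k))
          S = 0.1 * rng.standard_normal((k, M.shape[1]))
          for _ in range(iters):
              S = np.linalg.solve(C.T @ C + lam_s * np.eye(k), C.T @ M)   # ridge in S
              step = 1.0 / (np.linalg.norm(S, 2) ** 2 + 1e-12)
              C = row_shrink(C - step * (C @ S - M) @ S.T, step * lam_c)  # prox step
          return C, S

      rng = np.random.default_rng(1)
      C_true = np.zeros((60, 4))
      C_true[:8] = rng.standard_normal((8, 4))            # only 8 active rows
      M = C_true @ rng.standard_normal((4, 100)) + 0.01 * rng.standard_normal((60, 100))
      C, S = fit(M, k=4)
      print("active rows found:", int((np.linalg.norm(C, axis=1) > 1e-3).sum()))  # ~8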

  5. Canonical correlation analysis of professional stress, social support, and professional burnout among low-rank army officers

    Directory of Open Access Journals (Sweden)

    Chuan-yun LI

    2011-12-01

    Objective: The present study investigates the influence of professional stress and social support on professional burnout among low-rank army officers. Methods: The professional stress, social support, and professional burnout scales for low-rank army officers were used as test tools, and officers of established units (battalion, company, and platoon) were chosen as test subjects. Of the 260 scales sent out, 226 effective scales were received. Descriptive statistics and canonical correlation analysis models were used to analyze the influence of each variable. Results: The scores of low-rank army officers on the professional stress, social support, and professional burnout scales were above average, except on two factors, namely interpersonal support and de-individualization. The canonical analysis identified three groups of canonical correlation factors, of which two reached a significant level (P < 0.001). After further eliminating the social support variable, the canonical correlation analysis of professional stress and burnout showed that the first and second canonical correlation coefficients were 0.62 and 0.36, respectively, both highly significant (P < 0.001). Conclusion: Low-rank army officers experience higher professional stress and burnout levels, showing a lower sense of accomplishment, emotional exhaustion, and more serious depersonalization. However, social support can reduce the onset and seriousness of professional burnout among these officers by lessening pressure factors such as career development, work features, salary conditions, and other personal factors.

  6. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of positive definite (pd) matrices:
    • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan.
    • Elasticity: 6 × 6 pd matrices model stress tensors.
    • Machine Learning: n × n pd matrices occur as kernel matrices.
    Tanvi Jain, Averaging operations on matrices.

  7. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph) by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Last, we introduce the LNNLRS-graph into graph-regularized nonnegative matrix factorization to enhance the segmentation accuracy. Experimental results on remote sensing images show that, compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  8. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    Science.gov (United States)

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
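
    The plug-in recipe is short enough to state in code: threshold the sample correlation matrix at the usual sqrt(log p / n) scale, then evaluate the functional on the thresholded estimate. The sketch below compares naive and plug-in estimates of the squared off-diagonal Frobenius norm under a null model; the threshold constant is an illustrative choice.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 200, 100
      X = rng.standard_normal((n, p))            # null model: true correlations are 0

      R = np.corrcoef(X, rowvar=False)
      tau = 2.0 * np.sqrt(np.log(p) / n)         # hard-threshold level
      R_thr = np.where(np.abs(R) > tau, R, 0.0)

      off = ~np.eye(p, dtype=bool)
      print("naive off-diagonal ||R||_F^2:", np.sum(R[off] ** 2))      # badly biased
      print("thresholded plug-in estimate:", np.sum(R_thr[off] ** 2))  # near 0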

  9. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise. Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for handling high-dimensional data ...

  10. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is, possibly, larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows ...
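
    The adaptive LASSO itself is a two-step recipe that is easy to sketch in a plain regression setting (the paper's time-series setting changes the theory, not the mechanics): a first-stage estimate sets per-coefficient penalty weights, implemented here by rescaling columns before an ordinary LASSO fit. The penalty levels and toy data are assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso, Ridge

      rng = np.random.default_rng(0)
      n, p = 200, 50
      beta = np.zeros(p)
      beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]          # sparse truth
      X = rng.standard_normal((n, p))
      y = X @ beta + rng.standard_normal(n)

      b_init = Ridge(alpha=1.0).fit(X, y).coef_       # first-stage estimator
      w = 1.0 / (np.abs(b_init) + 1e-6)               # adaptive weights (gamma = 1)
      fit = Lasso(alpha=0.05).fit(X / w, y)           # weighted penalty via rescaling
      b_ada = fit.coef_ / w
      print("selected variables:", np.flatnonzero(np.abs(b_ada) > 1e-8))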

  11. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the numbers of samples from the two groups differ. The classification problem is considered in the high-dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some ...

  12. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  13. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Background: We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results: We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as the Benjamini-Hochberg FDR, Storey's q-value, SAM, and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Conclusion: We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method of π0 or FDR estimation in a dependency context.
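
    For reference, the Benjamini-Hochberg step-up procedure discussed above takes only a few lines; the p-value mixture below is an illustrative stand-in for a real microarray analysis.

      import numpy as np

      def benjamini_hochberg(pvals, q=0.05):
          """Boolean mask of hypotheses rejected at FDR level q (BH step-up)."""
          p = np.asarray(pvals)
          m = p.size
          order = np.argsort(p)
          below = p[order] <= q * np.arange(1, m + 1) / m
          k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
          reject = np.zeros(m, dtype=bool)
          reject[order[:k]] = True
          return reject

      rng = np.random.default_rng(0)
      pvals = np.concatenate([rng.uniform(size=900),              # true nulls
                              rng.uniform(high=0.01, size=100)])  # signals
      print("rejections:", int(benjamini_hochberg(pvals).sum()))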

  14. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high-dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  15. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high-dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
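
    The two-step hybrid recipe (sparsity-inducing selection, then a least-squares refit on the selected support) can be sketched for one equation of a toy VAR(1). This is an illustrative reading of the LASSO+LSE idea, not the authors' code; the model, penalty, and sample size are assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      rng = np.random.default_rng(0)
      p, T = 30, 400
      A = np.zeros((p, p))
      A[0, :3] = [0.5, -0.3, 0.2]                    # sparse, stable transition row
      X = np.zeros((T, p))
      for t in range(1, T):
          X[t] = X[t-1] @ A.T + 0.5 * rng.standard_normal(p)

      Z, y = X[:-1], X[1:, 0]                        # regress channel 0 on lagged channels
      support = np.flatnonzero(Lasso(alpha=0.03).fit(Z, y).coef_)  # step 1: LASSO
      refit = LinearRegression().fit(Z[:, support], y)             # step 2: LSE on support
      print("selected lags:", support)
      print("refit coefficients:", np.round(refit.coef_, 2))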

  16. Geogenic organic contaminants in the low-rank coal-bearing Carrizo-Wilcox aquifer of East Texas, USA

    Science.gov (United States)

    Chakraborty, Jayeeta; Varonka, Matthew S.; Orem, William H.; Finkelman, Robert B.; Manton, William

    2017-01-01

    The organic composition of groundwater along the Carrizo-Wilcox aquifer in East Texas (USA), sampled from rural wells in May and September 2015, was examined as part of a larger study of the potential health and environmental effects of organic compounds derived from low-rank coals. The quality of water from the low-rank coal-bearing Carrizo-Wilcox aquifer is a potential environmental concern and no detailed studies of the organic compounds in this aquifer have been published. Organic compounds identified in the water samples included: aliphatics and their fatty acid derivatives, phenols, biphenyls, N-, O-, and S-containing heterocyclic compounds, polycyclic aromatic hydrocarbons (PAHs), aromatic amines, and phthalates. Many of the identified organic compounds (aliphatics, phenols, heterocyclic compounds, PAHs) are geogenic and originated from groundwater leaching of young and unmetamorphosed low-rank coals. Estimated concentrations of individual compounds ranged from about 3.9 to 0.01 μg/L. In many rural areas in East Texas, coal strata provide aquifers for drinking water wells. Organic compounds observed in groundwater are likely to be present in drinking water supplied from wells that penetrate the coal. Some of the organic compounds identified in the water samples are potentially toxic to humans, but at the estimated levels in these samples, the compounds are unlikely to cause acute health problems. The human health effects of low-level chronic exposure to coal-derived organic compounds in drinking water in East Texas are currently unknown, and continuing studies will evaluate possible toxicity.

  17. Recovering task fMRI signals from highly under-sampled data with low-rank and temporal subspace constraints.

    Science.gov (United States)

    Chiew, Mark; Graedel, Nadine N; Miller, Karla L

    2018-07-01

    Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative, constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features.
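
    As a toy version of orienting the temporal subspace with external constraints, the sketch below completes an under-sampled space-time matrix with rank-(k+r) factors U V^T whose first k temporal components are pinned to a supplied design matrix, while only the remaining r components are learned. This alternating-least-squares formulation is an assumption for illustration, not the reconstruction model of the paper.

    ```python
    # Alternating least squares completion with a partially fixed temporal basis:
    # Y ~ U @ V.T on the sampled entries, with V[:, :k] pinned to the design.
    import numpy as np

    def constrained_lowrank_complete(Y, mask, V_fixed, r_free=2, iters=50, lam=1e-3):
        """Y: (nvox, T) data, zero where unsampled; mask: (nvox, T) bool;
        V_fixed: (T, k) known task regressors (assumed design matrix)."""
        T, k = V_fixed.shape
        rng = np.random.default_rng(0)
        V = np.hstack([V_fixed, rng.standard_normal((T, r_free))])
        U = np.zeros((Y.shape[0], k + r_free))
        for _ in range(iters):
            for i in range(Y.shape[0]):     # spatial coefficients per voxel
                A = V[mask[i]]
                U[i] = np.linalg.solve(A.T @ A + lam * np.eye(k + r_free),
                                       A.T @ Y[i, mask[i]])
            for t in range(T):              # update only the free temporal columns
                m = mask[:, t]
                B = U[m][:, k:]
                rhs = Y[m, t] - U[m][:, :k] @ V[t, :k]
                V[t, k:] = np.linalg.solve(B.T @ B + lam * np.eye(r_free),
                                           B.T @ rhs)
        return U @ V.T                      # completed space-time matrix
    ```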

  18. Technical Note: Interleaved Bipolar Acquisition and Low-rank Reconstruction for Water-Fat Separation in MRI.

    Science.gov (United States)

    Cho, JaeJin; Park, HyunWook

    2018-05-17

    To acquire interleaved bipolar data and reconstruct the full data using a low-rank property for water-fat separation. Bipolar acquisition suffers from issues related to gradient switching, the opposite gradient polarities, and other system imperfections, which prevent accurate water-fat separation. In this study, an interleaved bipolar acquisition scheme and a low-rank reconstruction method were proposed to reduce issues from the bipolar gradients while achieving a short imaging time. The proposed interleaved bipolar acquisition scheme collects echo-time signals from both gradient polarities; however, the sequence increases the imaging time. To reduce the imaging time, the signals were subsampled in every dimension of k-space. The low-rank property of the bipolar acquisition was defined and exploited to estimate the full data from the acquired subsampled data. To eliminate the bipolar issues, in the proposed method, the water-fat separation was performed separately for each gradient polarity, and the results for the positive and negative gradient polarities were combined after the water-fat separation. A phantom study and in-vivo experiments were conducted on a 3T Siemens Verio system. The results for the proposed method were compared with the results of the fully sampled interleaved bipolar acquisition and Soliman's method, which was a previous water-fat separation approach for reducing the issues of bipolar gradients and accelerating the interleaved bipolar acquisition. The proposed method provided accurate water and fat images without the issues of bipolar gradients and demonstrated a better performance compared with the results of the previous methods. The water-fat separation using the bipolar acquisition has several benefits, including a short echo-spacing time. However, it suffers from bipolar-gradient issues such as strong gradient switching, system imperfection, and eddy current effects. This study demonstrated that accurate water-fat separated images can be obtained from subsampled interleaved bipolar data by exploiting the low-rank property.

  19. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    Science.gov (United States)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: we report speedups, linear in the bond or local dimension, of up to 24 times in quasi-two-dimensional cylindrical systems.
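
    A minimal randomized factorization in the Halko-Martinsson-Tropp style conveys the flavor of the routine substituted for deterministic truncated SVD; the oversampling and power-iteration counts below are illustrative defaults, and a production tensor-network code would tune them to the singular-value decay of its two-site tensors.

    ```python
    # Randomized rank-k factorization: sketch the range with a Gaussian test
    # matrix, optionally sharpen it with power iterations, then SVD a small matrix.
    import numpy as np

    def randomized_svd(A, rank, n_oversample=10, n_power=2, seed=0):
        rng = np.random.default_rng(seed)
        Y = A @ rng.standard_normal((A.shape[1], rank + n_oversample))
        for _ in range(n_power):
            Y = A @ (A.T @ Y)               # power iterations sharpen the spectrum
        Q, _ = np.linalg.qr(Y)              # orthonormal basis for the range
        Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    # e.g. truncating a reshaped TEBD two-site tensor to bond dimension chi:
    # U, s, Vt = randomized_svd(theta.reshape(dl * d, d * dr), rank=chi)
    ```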

  20. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering is a convenient method for organizing a large data set so that it can be easily understood and information can be efficiently retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measure called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. It obtains clusters easily, thereby avoiding the curse of dimensionality, and it can also cluster large data sets with mixed numeric and categorical values.

  1. Ion-exchanged calcium from calcium carbonate and low-rank coals: high catalytic activity in steam gasification

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Y.; Asami, K. [Tokoku University, Sendai (Japan). Inst. for Chemical Reaction Science

    1996-03-01

    Interactions between CaCO{sub 3} and low-rank coals were examined, and the steam gasification of the resulting Ca-loaded coals was carried out at 973 K with a thermobalance. Chemical analysis and FT-IR spectra show that CaCO{sub 3} can react readily with COOH groups to form ion-exchanged Ca and CO{sub 2} when mixed with brown coal in water at room temperature. The extent of the exchange depends on the crystalline form of CaCO{sub 3}, and is higher for aragonite, naturally present in seashells and coral reefs, than for calcite from limestone. The FT-IR spectra reveal that ion-exchange reactions also proceed during the kneading of CaCO{sub 3} with low-rank coals. The exchanged Ca promotes gasification and achieves a 40-60-fold rate enhancement for brown coal with a lower content of inherent minerals. The catalyst effectiveness of kneaded CaCO{sub 3} depends on the coal type, in other words, on the extent of ion exchange. 11 refs., 7 figs., 3 tabs.

  2. Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction

    Science.gov (United States)

    Fang, Shiting; Wang, Huafeng; Liu, Yueliang; Zhang, Minghui; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2017-10-01

    Lung 4D computed tomography (4D-CT), which is a time-resolved CT data acquisition, plays an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk. Therefore, resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on patch-based low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed by using a patch searching strategy. Thereafter, singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The output high-resolution patches are finally assembled to form the entire image. This method is extensively evaluated using two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3%, relative to linear interpolation, back projection (BP) and Zhang et al's algorithm. A new algorithm has been developed to improve the resolution of 4D-CT. In all experiments, the proposed method outperforms various interpolation methods, as well as BP and Zhang et al's method, indicating the effectiveness and competitiveness of the proposed algorithm.
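
    The core operation, shrinking the singular values of a matrix whose rows are vectorized similar patches, fits in a few lines; the threshold tau and the patch-grouping step that builds P are assumptions left to the surrounding pipeline.

    ```python
    # Singular value soft-thresholding of a stack of similar patches: the
    # low-rank prior on the patch matrix suppresses noise and aliasing.
    import numpy as np

    def sv_shrink(P, tau):
        """P: (n_patches, patch_size) matrix of vectorized similar patches."""
        U, s, Vt = np.linalg.svd(P, full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # soft-threshold singular values
        return (U * s) @ Vt                 # low-rank estimate of the patch stack
    ```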

  3. Development of low rank coals upgrading and their CWM producing technology; Teihin`itan kaishitsu ni yoru CWM seizo gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    Sugiyama, T [Center for Coal Utilization, Japan, Tokyo (Japan); Tsurui, M; Suto, Y; Asakura, M [JGC Corp., Tokyo (Japan); Ogawa, J; Yui, M; Takano, S [Japan COM Co. Ltd., Japan, Tokyo (Japan)

    1996-09-01

    A CWM manufacturing technology was developed by means of upgrading low rank coals. Even though some low rank coals have such advantages as low ash, low sulfur and high volatile matter content, many of them are used merely on a small scale in areas near the mine-mouths because of their high moisture content, low calorific value and high ignitability. Therefore, discussions were given on a coal fuel manufacturing technology by which coal is irreversibly dehydrated with as much volatile matter as possible remaining in the coal, and the coal is made into high-concentration CWM, so that it can be safely transported and stored. The technology uses a method to treat coal with hot water under high pressure and dry it with hot water. The method performs not only removal of water, but also irreversible dehydration without losing volatile matter, by decomposing hydrophilic groups on the surface and blocking micropores with volatile matter in the coal (wax and tar). The upgrading effect was verified by processing coals in a pilot plant, which yielded a greater calorific value and higher-concentration CWM than the conventional processes. A CWM combustion test proved lower NOx, lower SOx and a higher combustion rate than for bituminous coal. The ash content was also found to be lower. This process suits a Texaco-type gasification furnace. For a production scale of three million tons a year, the production cost is lower by 2 yen per 10{sup 3} kcal than for heavy oil with the same sulfur content. 11 figs., 15 tabs.

  4. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    Science.gov (United States)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
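
    For reference, a dense-matrix caricature of the block power (subspace) iteration used as the baseline above: in the actual calculations, each column of the block is itself stored as a low-rank SOP tensor and recompressed after every application of the Hamiltonian, which is the rank restriction the abstract refers to.

    ```python
    # Block power (subspace) iteration for a symmetric operator H; Ritz values
    # approximate the dominant eigenvalues. The SOP rank reduction is not
    # modeled here; plain dense linear algebra stands in for it.
    import numpy as np

    def block_power(H, k, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        V = np.linalg.qr(rng.standard_normal((H.shape[0], k)))[0]
        for _ in range(iters):
            V, _ = np.linalg.qr(H @ V)      # apply operator, re-orthonormalize
        T = V.T @ H @ V                     # small Rayleigh-quotient matrix
        evals, S = np.linalg.eigh(T)
        return evals, V @ S                 # Ritz values and Ritz vectors
    ```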

  5. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  6. Inverse M-matrices and ultrametric matrices

    CERN Document Server

    Dellacherie, Claude; San Martin, Jaime

    2014-01-01

    The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.

  7. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    Science.gov (United States)

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures designs are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together, but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases, in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem, with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with that of a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  8. Improving residue-residue contact prediction via low-rank and sparse decomposition of residue correlation matrix.

    Science.gov (United States)

    Zhang, Haicang; Gao, Yujuan; Deng, Minghua; Wang, Chao; Zhu, Jianwei; Li, Shuai Cheng; Zheng, Wei-Mou; Bu, Dongbo

    2016-03-25

    Strategies for correlation analysis in protein contact prediction often encounter two challenges, namely, the indirect coupling among residues, and the background correlations mainly caused by phylogenetic biases. While various studies have been conducted on how to disentangle indirect coupling, the removal of background correlations still remains unresolved. Here, we present an approach for removing background correlations via low-rank and sparse decomposition (LRS) of a residue correlation matrix. The correlation matrix can be constructed using either local inference strategies (e.g., mutual information, or MI) or global inference strategies (e.g., direct coupling analysis, or DCA). In our approach, a correlation matrix was decomposed into two components, i.e., a low-rank component representing background correlations, and a sparse component representing true correlations. Finally, the residue contacts were inferred from the sparse component of the correlation matrix. We trained our LRS-based method on the PSICOV dataset, and tested it on both GREMLIN and CASP11 datasets. Our experimental results suggested that LRS significantly improves the contact prediction precision. For example, when equipped with the LRS technique, the prediction precision of MI and mfDCA increased from 0.25 to 0.67 and from 0.58 to 0.70, respectively (top L/10 predicted contacts, sequence separation: 5 AA, dataset: GREMLIN). In addition, our LRS technique also consistently outperforms the popular denoising technique APC (average product correction), on both local (MI_LRS: 0.67 vs MI_APC: 0.34) and global measures (mfDCA_LRS: 0.70 vs mfDCA_APC: 0.67). Interestingly, we found that when equipped with our LRS technique, local inference strategies performed in a manner comparable to that of global inference strategies, implying that the application of the LRS technique narrowed the performance gap between local and global inference strategies. Overall, our LRS technique greatly facilitates residue-residue contact prediction.
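
    The low-rank plus sparse split can be sketched with a simple alternating proximal scheme in the spirit of robust PCA: each pass soft-thresholds the singular values to update the background term and soft-thresholds the entries to update the sparse term. The penalty weights tau and lam below are illustrative; the paper's exact LRS formulation may differ.

    ```python
    # Alternating proximal scheme for C ~ L + S with L low-rank (background
    # correlations) and S sparse (true couplings), robust-PCA style.
    import numpy as np

    def lrs_decompose(C, tau=1.0, lam=0.1, iters=100):
        L = np.zeros_like(C)
        S = np.zeros_like(C)
        for _ in range(iters):
            U, sig, Vt = np.linalg.svd(C - S, full_matrices=False)
            L = (U * np.maximum(sig - tau, 0.0)) @ Vt          # singular value shrinkage
            R = C - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # entrywise shrinkage
        return L, S
    ```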

  9. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.

  10. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
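
    In the one-sample case, the coordinate-wise likelihood ratio for testing mu_j = 0 with unknown variance reduces to n log(1 + t_j^2/(n-1)), so the overall statistic is a sum of log-transformed squared t-statistics across dimensions. A minimal sketch of this raw sum follows; the centering and scaling needed for the asymptotic normal limit are omitted and would follow the paper.

    ```python
    # Unstandardized one-sample diagonal LRT statistic: a sum of
    # log-transformed squared t-statistics, one per coordinate.
    import numpy as np

    def diag_lrt_stat(X):
        """X: (n, p) data matrix with i.i.d. rows."""
        n = X.shape[0]
        t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)  # coordinate t-stats
        return float(np.sum(n * np.log1p(t ** 2 / (n - 1))))
    ```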

  11. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  12. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    Science.gov (United States)

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and the RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to a higher-rank, corrupted calibration matrix, which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal-to-noise ratio as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans.
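
    The selection rule can be stated compactly: score a candidate delay by the energy of the calibration matrix's singular values beyond the expected rank, and choose the delay that minimizes it. In the sketch below, calib_matrix(delay) is a hypothetical user-supplied routine that regrids the acquired k-space under the assumed delay and assembles the calibration matrix; the paper minimizes this criterion with Gauss-Newton rather than the coarse grid search shown here.

    ```python
    # Rank-based delay scoring: the true delay should make the calibration
    # matrix closest to low-rank. Grid search stands in for Gauss-Newton.
    import numpy as np

    def rank_excess(A, r):
        """Energy in singular values beyond rank r (small iff A is ~rank r)."""
        s = np.linalg.svd(A, compute_uv=False)
        return float(np.sum(s[r:] ** 2))

    def estimate_delay(calib_matrix, candidate_delays, r):
        scores = [rank_excess(calib_matrix(d), r) for d in candidate_delays]
        return candidate_delays[int(np.argmin(scores))]
    ```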

  13. Dynamic PET reconstruction using temporal patch-based low rank penalty for ROI-based brain kinetic analysis

    International Nuclear Information System (INIS)

    Kim, Kyungsang; Ye, Jong Chul; Son, Young Don; Cho, Zang Hee; Bresler, Yoram; Ra, Jong Beom

    2015-01-01

    Dynamic positron emission tomography (PET) is widely used to measure changes in the bio-distribution of radiopharmaceuticals within particular organs of interest over time. However, to retain sufficient temporal resolution, the number of photon counts in each time frame must be limited. Therefore, conventional reconstruction algorithms such as the ordered subset expectation maximization (OSEM) produce noisy reconstruction images, thus degrading the quality of the extracted time activity curves (TACs). To address this issue, many advanced reconstruction algorithms have been developed using various spatio-temporal regularizations. In this paper, we extend earlier results and develop a novel temporal regularization, which exploits the self-similarity of patches that are collected in dynamic images. The main contribution of this paper is to demonstrate that the correlation of patches can be exploited using a low-rank constraint that is insensitive to global intensity variations. The resulting optimization framework is, however, non-Lipschitz and non-convex due to the Poisson log-likelihood and low-rank penalty terms. Direct application of the conventional Poisson image deconvolution by an augmented Lagrangian (PIDAL) algorithm is, however, problematic due to its large memory requirements, which prevent its parallelization. Thus, we propose a novel optimization framework using the concave-convex procedure (CCCP) by exploiting the Legendre–Fenchel transform, which is computationally efficient and parallelizable. In computer simulation and a real in vivo experiment using a high-resolution research tomograph (HRRT) scanner, we confirm that the proposed algorithm can improve image quality while also extracting more accurate region-of-interest (ROI) based kinetic parameters. Furthermore, we show that the total reconstruction time for HRRT PET is significantly accelerated using our GPU implementation, which makes the algorithm very practical in clinical environments.

  14. Development of economical and high efficient desulfurization process using low rank coal; Teitankadotan wo mochiita ankana kokoritsu datsuryuho no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Takarada, Y; Kato, K; Kuroda, M; Nakagawa, N [Gunma University, Gunma (Japan). Faculty of Engineering; Roman, M [New Energy and Industrial Technology Development Organization, Tokyo, (Japan)

    1997-02-01

    Experiments reveal the characteristics of low rank coal serving as a desulfurizing material in a fluidized coal bed reactor, with its oxygen-containing functional groups exchanged with Ca ions. This effort aims at identifying inexpensive Ca materials and determining the desulfurizing characteristics of Ca-carrying brown coal. A slurry of cement sludge, serving as a Ca source, and low rank coal is agitated for the exchange of functional groups and Ca ions, and the desulfurizing characteristics of the Ca-carrying brown coal are determined. The Ca-carrying brown coal and high-sulfur coal char are mixed and incinerated in a fluidized bed reactor, and it is found that a desulfurization rate of 75% is achieved when the Ca/S ratio is 1 in the desulfurization of SO2. This rate is far higher than the rate obtained when limestone or cement sludge without preliminary treatment is used as a desulfurizer. Next, Ca-carrying brown coal and H2S are caused to react with each other in a fixed bed reactor, and it is found that the desulfurization characteristics are not dependent on the diameter of the Ca-carrying brown coal grain, that the coal differs from limestone in that it stays quite active against H2S for as long as 40 minutes after the start of the reaction, and that CaO of small crystal diameter is dispersed in quantities into the char upon thermal disintegration of Ca-carrying brown coal, causing the coal to stay quite active. 5 figs.

  15. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

    This paper presents the R package HDclassif, which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than for other model-based methods, and this allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the general public license, as part of the R software project.

  16. Almost commuting self-adjoint matrices: The real and self-dual cases

    Science.gov (United States)

    Loring, Terry A.; Sørensen, Adam P. W.

    2016-08-01

    We show that a pair of almost commuting self-adjoint, symmetric matrices is close to a pair of commuting self-adjoint, symmetric matrices (in a uniform way). Moreover, we prove that the same holds with self-dual in place of symmetric and also for paths of self-adjoint matrices. Since a symmetric, self-adjoint matrix is real, we get a real version of Huaxin Lin’s famous theorem on almost commuting matrices. Similarly, the self-dual case gives a version for matrices over the quaternions. To prove these results, we develop a theory of semiprojectivity for real C*-algebras and also examine various definitions of low-rank for real C*-algebras.

  17. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
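
    A toy example of where the savings come from: a single off-diagonal block of a smooth covariance kernel matrix is replaced by a truncated SVD, so its storage drops from m*n entries to k*(m+n). A real hierarchical-matrix code applies this recursively over an admissible block partition; the kernel and tolerance below are illustrative.

    ```python
    # Compress one off-diagonal block of a smooth kernel matrix to low rank.
    import numpy as np

    n = 512
    x = np.linspace(0.0, 1.0, 2 * n)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)   # smooth covariance kernel
    block = K[:n, n:]                                   # off-diagonal block

    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.sum(s > 1e-8 * s[0]))                    # numerical rank at tol 1e-8
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    print("rank:", k,
          "rel. error:", np.linalg.norm(block - approx) / np.linalg.norm(block),
          "storage:", block.size, "->", k * sum(block.shape))
    ```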


  19. Use of Green Mussel Shell as a Desulfurizer in the Blending of Low Rank Coal-Biomass Briquette Combustion

    Directory of Open Access Journals (Sweden)

    Mahidin Mahidin

    2016-08-01

    Calcium oxide-based material is available abundantly and naturally. A potential resource of this material is marine mollusk shells such as clams, scallops, mussels, oysters, winkles and nerites. CaO-based material has exhibited a good performance as a desulfurizer or adsorbent in coal combustion in order to reduce SO2 emission. In this study, pulverized green mussel shell, without calcination, was utilized as the desulfurizer in a briquette produced from a mixture of low rank coal and palm kernel shell (PKS), also known as a bio-briquette. The ratio of coal to PKS in the briquette was 90:10 (wt/wt). The influence of green mussel shell content and combustion temperature was examined to prove the possible use of this material as a desulfurizer. The ratios of Ca to S (Ca = calcium content in the desulfurizer; S = sulfur content in the briquette) were fixed at 1:1, 1.25:1, 1.5:1, 1.75:1, and 2:1 (mole/mole). The burning (or desulfurization) temperature range was 300-500 °C, the reaction time was 720 seconds and the air flow rate was 1.2 L/min. The results showed that green mussel shell can be introduced as a desulfurizer in coal briquette or bio-briquette combustion. The desulfurization process using this desulfurizer exhibited first-order reaction behavior and a highest average efficiency of 84.5%.

  20. Thermal and chemical modifications on a low rank coal by iron addition in swept fixed bed hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Mastral, A.M.; Perez-Surio, M.J.; Palacios, J.M. [CSIC, Zaragoza (Spain). Inst. de Carboquimica

    1998-05-01

    The paper discusses the thermal and chemical changes taking place in a low rank coal when it is subjected to hydropyrolysis conditions with Red Mud as the catalytic precursor. For each run, 5 g of coal were pyrolysed in a swept fixed bed reactor at 40 kg/cm{sup 2} hydrogen pressure. The variables of the process were: temperatures ranging from 400 to 600{degree}C; hydrogen flows of 0.5 and 2 l/min; residence times of 10 and 30 min; and the presence or absence of Red Mud. The conversion product distribution and a wide battery of complementary analyses allow information to be gathered regarding the changes undergone by the coal structure, both in its organic and inorganic components, in its conversion into liquids and chars. From the data obtained, it can be deduced that: (1) at 400{degree}C the iron catalyst is not active; (2) at higher temperatures, catalytic cracking by iron is observed more than hydrogenating activity, owing to the transformation of Fe{sub 2}O{sub 3} into Fe{sub 3}S{sub 4}, crystallographically a spinel; (3) in this coal hydropyrolysis, one third of the coal is converted into liquids; and (4) Red Mud helps to reduce sulfur emissions by fixing H{sub 2}S as Fe{sub 3}S{sub 4}. 10 refs., 5 figs., 5 tabs.

  1. Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal

    Energy Technology Data Exchange (ETDEWEB)

    Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri,; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh

    2012-11-30

    This report describes the development of the design of an advanced dry feed system that was carried out under Task 4.0 of Cooperative Agreement DE-FE0007902 with the US DOE, “Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the use of Low-Rank Coal.” The resulting design will be used for the advanced technology IGCC case with 90% carbon capture for sequestration to be developed under Task 5.0 of the same agreement. The scope of work covered coal preparation and feeding up through the gasifier injector. Subcomponents have been broken down into feed preparation (including grinding and drying), low pressure conveyance, pressurization, high pressure conveyance, and injection. Pressurization of the coal feed is done using Posimetric Feeders sized for the application. In addition, a secondary feed system is described for preparing and feeding slag additive and recycle fines to the gasifier injector. This report includes information on the basis for the design, requirements for down selection of the key technologies used, the down selection methodology and the final, down-selected design for the Posimetric Feed System, or PFS.

  2. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints

    Directory of Open Access Journals (Sweden)

    Ryan Wen Liu

    2017-03-01

    Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to solve directly through commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or can be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee superior imaging performance in terms of quantitative and visual image quality assessments.

  3. Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

    KAUST Repository

    Boukaram, W.

    2015-03-25

    Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics, very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe the distribution of dust particles in the atmosphere, the concentration of mineral resources in the earth's crust or an uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data sparse matrices called hierarchical matrices (H-matrices), where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work, and we will present work done on the matrix-vector operation on the GPU using the KSPARSE library.

  4. Tensor Dictionary Learning for Positive Definite Matrices.

    Science.gov (United States)

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2015-11-01

    Sparse models have proven to be extremely successful in image processing and computer vision. However, a majority of the effort has been focused on sparse representation of vectors and low-rank models for general matrices. The success of sparse modeling, along with popularity of region covariances, has inspired the development of sparse coding approaches for these positive definite descriptors. While in earlier work, the dictionary was formed from all, or a random subset of, the training signals, it is clearly advantageous to learn a concise dictionary from the entire training set. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between sparse coding and dictionary update stages, and different atom update methods are described. A discriminative version of the dictionary learning approach is also proposed, which simultaneously learns dictionaries for different classes in classification or clustering. Experimental results demonstrate the advantage of learning dictionaries from data both from reconstruction and classification viewpoints. Finally, a software library is presented comprising C++ binaries for all the positive definite sparse coding and dictionary learning approaches presented here.

  5. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  6. Modeling and Simulation on NOx and N2O Formation in Co-combustion of Low-rank Coal and Palm Kernel Shell

    Directory of Open Access Journals (Sweden)

    Mahidin Mahidin

    2012-12-01

    NOx and N2O emissions from coal combustion are claimed to be major contributors to acid rain, photochemical smog, greenhouse and ozone depletion problems. Given these facts, the study of the formation of those emissions is a topic of interest in the combustion area. In this paper, a theoretical study by modeling and simulation of NOx and N2O formation in co-combustion of low-rank coal and palm kernel shell has been done. The combustion model was developed using the principle of chemical-reaction equilibrium. Simulation with the model in order to evaluate the composition of the flue gas was performed by minimizing the Gibbs free energy. The results showed that introducing biomass into coal combustion can reduce the NOx concentration considerably. The maximum NO level in co-combustion of low-rank coal and palm kernel shell with fuel composition 1:1 is 2,350 ppm, low compared with up to 3,150 ppm for combustion of low-rank coal alone. Moreover, N2O is less than 0.25 ppm in all cases. Keywords: low-rank coal, N2O emission, NOx emission, palm kernel shell
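
    The equilibrium calculation sketched above amounts to a constrained minimization: choose species mole numbers n_i to minimize G/RT = sum_i n_i (g_i + ln(n_i/n_tot)) subject to elemental balances. The species set, the dimensionless g_i values and the feed composition below are placeholders for illustration, not the thermodynamic data of the paper.

    ```python
    # Toy Gibbs-energy minimization for equilibrium flue-gas composition:
    # min G/RT = sum_i n_i * (g_i + ln(n_i / n_tot)) s.t. elemental balances.
    import numpy as np
    from scipy.optimize import minimize

    species = ["CO2", "CO", "H2O", "N2", "NO", "N2O"]
    g = np.array([-94.0, -47.0, -46.0, 0.0, 10.0, 12.0])  # placeholder g_i = mu_i/RT
    E = np.array([[1, 1, 0, 0, 0, 0],                     # C balance
                  [2, 1, 1, 0, 1, 1],                     # O balance
                  [0, 0, 2, 0, 0, 0],                     # H balance
                  [0, 0, 0, 2, 1, 2]])                    # N balance
    b = E @ np.array([0.8, 0.0, 0.6, 3.0, 0.0, 0.0])      # element totals in feed

    def gibbs(n):
        n = np.maximum(n, 1e-12)
        return float(n @ (g + np.log(n / n.sum())))

    res = minimize(gibbs, x0=np.full(6, 0.5), method="SLSQP",
                   bounds=[(1e-12, None)] * 6,
                   constraints=[{"type": "eq", "fun": lambda n: E @ n - b}])
    print(dict(zip(species, np.round(res.x / res.x.sum(), 6))))  # mole fractions
    ```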

  7. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad; Hoteit, Ibrahim

    2014-01-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models.
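
    The complex-step trick at the heart of CSM is worth seeing in isolation: for a real-analytic f, f'(x) ≈ Im f(x + ih)/h involves no subtractive cancellation, so h can be made extremely small and the derivative is accurate to machine precision. The snippet below uses the classic Squire-Trapp test function; in the filter, the same idea is applied to the model operator along each filter correction direction.

    ```python
    # Complex-step derivative: second-order accurate with no round-off
    # cancellation, so a tiny step like 1e-30 is safe.
    import numpy as np

    def csd(f, x, h=1e-30):
        return np.imag(f(x + 1j * h)) / h

    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
    print(csd(f, 1.5))   # matches the analytic derivative to machine precision
    ```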


  9. Strategies to reduce the complexity of hydrologic data assimilation for high-dimensional models

    Science.gov (United States)

    Hernandez, F.; Liang, X.

    2017-12-01

    Probabilistic forecasts in the geosciences offer invaluable information by allowing the uncertainty of predicted conditions (including threats like floods and droughts) to be estimated. However, while forecast systems based on modern data assimilation algorithms are capable of producing multi-variate probability distributions of future conditions, the computational resources required to fully characterize the dependencies between the model's state variables render their applicability impractical for high-resolution cases. This occurs because of the quadratic space complexity of storing the covariance matrices that encode these dependencies and the cubic time complexity of performing inference operations with them. In this work we introduce two complementary strategies to reduce the size of the covariance matrices that are at the heart of Bayesian assimilation methods (like some variants of ensemble Kalman filters and of particle filters) and variational methods. The first strategy involves the optimized grouping of state variables by clustering individual cells of the model into "super-cells." A dynamic fuzzy clustering approach is used to take into account the states (e.g., soil moisture) and forcings (e.g., precipitation) of each cell at each time step. The second strategy consists in finding a compressed representation of the covariance matrix that still encodes the most relevant information but that can be more efficiently stored and processed. A learning and a belief-propagation inference algorithm are developed to take advantage of this modified low-rank representation. The two proposed strategies are incorporated into OPTIMISTS, a state-of-the-art hybrid Bayesian/variational data assimilation algorithm, and comparative streamflow forecasting tests are performed using two watersheds modeled with the Distributed Hydrology Soil Vegetation Model (DHSVM). Contrasts are made between the efficiency gains and forecast accuracy losses of each strategy.

  10. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    Directory of Open Access Journals (Sweden)

    Zutao Zhang

    2016-06-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the needs of vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main modules, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control. First, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Second, an information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.

  11. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained l1 minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.

  12. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  13. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  14. Simulating propagation of decomposed elastic waves using low-rank approximate mixed-domain integral operators for heterogeneous transversely isotropic media

    KAUST Repository

    Cheng, Jiubing

    2014-08-05

    In elastic imaging, the extrapolated vector fields are decomposed into pure wave modes, such that the imaging condition produces interpretable images, which characterize the reflectivity of different reflection types. Conventionally, wavefield decomposition in anisotropic media is costly as the operators involved are dependent on the velocity, and thus not stationary. In this abstract, we propose an efficient approach to directly extrapolate the decomposed elastic waves using low-rank approximate mixed space/wavenumber domain integral operators for heterogeneous transversely isotropic (TI) media. The low-rank approximation is, thus, applied to the pseudospectral extrapolation and decomposition at the same time. The pseudo-spectral implementation also allows for relatively large time steps in which the low-rank approximation is applied. Synthetic examples show that it can yield dispersion-free extrapolation of the decomposed quasi-P (qP) and quasi-SV (qSV) modes, which can be used for imaging, as well as the total elastic wavefields.

  15. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
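
    The low-rank matrix recovery step via augmented Lagrange multipliers is, in essence, robust PCA (principal component pursuit). Below is a compact sketch under common default parameter choices, which are assumptions rather than the authors' exact settings.

```python
# Principal component pursuit solved with an inexact augmented Lagrange
# multiplier (IALM) loop: D is split into a low-rank part L and a sparse
# outlier part S. Parameter defaults follow common practice.
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(D, 2)
    mu = 1.25 / norm_two                           # penalty parameter
    mu_bar = mu * 1e7                              # cap on the penalty
    Y = D / max(norm_two, np.abs(D).max() / lam)   # dual variable init
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # sparse update: elementwise soft thresholding
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z = D - L - S                              # primal residual
        Y = Y + mu * Z
        mu = min(mu * 1.5, mu_bar)
        if np.linalg.norm(Z) / np.linalg.norm(D) < tol:
            break
    return L, S
```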

  16. Contributions to Large Covariance and Inverse Covariance Matrices Estimation

    OpenAIRE

    Kang, Xiaoning

    2016-01-01

    Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...

  17. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.

  18. Computing Low-Rank Approximation of a Dense Matrix on Multicore CPUs with a GPU and Its Application to Solving a Hierarchically Semiseparable Linear System of Equations

    Directory of Open Access Journals (Sweden)

    Ichitaro Yamazaki

    2015-01-01

    of their low-rank properties. To compute a low-rank approximation of a dense matrix, in this paper, we study the performance of QR factorization with column pivoting or with restricted pivoting on multicore CPUs with a GPU. We first propose several techniques to reduce the postprocessing time, which is required for restricted pivoting, on a modern CPU. We then examine the potential of using a GPU to accelerate the factorization process with both column and restricted pivoting. Our performance results on two eight-core Intel Sandy Bridge CPUs with one NVIDIA Kepler GPU demonstrate that using the GPU, the factorization time can be reduced by a factor of more than two. In addition, to study the performance of our implementations in practice, we integrate them into the recently developed software package StruMF, which algebraically exploits such low-rank structures for solving a general sparse linear system of equations. Our performance results for solving Poisson's equations demonstrate that the proposed techniques can significantly reduce the preconditioner construction time of StruMF on the CPUs, and the construction time can be further reduced by 10%–50% using the GPU.
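
    For readers who want to experiment, a minimal rank-k approximation via QR with column pivoting, the kernel whose CPU/GPU performance the paper studies, can be written with SciPy as follows; the test matrix is illustrative.

```python
# Rank-k approximation via QR factorization with column pivoting.
import numpy as np
from scipy.linalg import qr

def low_rank_qrcp(A, k):
    Q, R, piv = qr(A, mode='economic', pivoting=True)   # A[:, piv] = Q @ R
    Ak = Q[:, :k] @ R[:k, :]                             # rank-k approx, permuted
    return Ak[:, np.argsort(piv)]                        # undo the permutation

rng = np.random.default_rng(0)
A = np.outer(np.arange(6.0), np.ones(5)) + 1e-3 * rng.standard_normal((6, 5))
print(np.linalg.norm(A - low_rank_qrcp(A, 2)))           # small residual
```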

  19. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.

  20. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
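
    A hedged sketch of the one-sample statistic described above, built as a sum of log-transformed squared t-statistics; the exact scaling and centering used in the paper may differ, so treat this particular form as an assumption.

```python
# One plausible form of a diagonal one-sample statistic: per-feature
# t-statistics combined as a sum of log-transformed squares.
import numpy as np

def diag_lrt_statistic(X):
    n, p = X.shape
    xbar = X.mean(axis=0)
    s = X.std(axis=0, ddof=1)
    t = np.sqrt(n) * xbar / s                  # per-feature t-statistics
    return np.sum(np.log1p(t**2 / (n - 1)))    # summation of log-transformed t^2

X = np.random.default_rng(1).standard_normal((50, 1000))
print(diag_lrt_statistic(X))
```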

  1. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
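
    As background for QED, a static equi-depth quantizer chooses cut points at data quantiles so that each bin holds roughly the same number of values; the query-dependent refinement described above would recompute such cuts relative to the query. A minimal static sketch follows (function names are hypothetical).

```python
# Static equi-depth quantization: quantile cut points give near-equal bin counts.
import numpy as np

def equi_depth_cuts(column, n_bins):
    qs = np.linspace(0, 1, n_bins + 1)[1:-1]
    return np.quantile(column, qs)

def quantize(column, cuts):
    return np.searchsorted(cuts, column)       # bin index per value

col = np.random.default_rng(2).exponential(size=10_000)
cuts = equi_depth_cuts(col, 8)
print(np.bincount(quantize(col, cuts)))        # near-equal bin counts
```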

  2. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis has become a popular data analysis method in a number of areas.

  3. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Vol. 9, No. 1 (2014), pp. 131-144 ISSN 1452-4864 Grant - others: GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords: data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  4. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  5. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and

  6. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  7. Realm of Matrices

    Indian Academy of Sciences (India)

    IAS Admin

    harmonic analysis and complex analysis ... algebra describes not only the study of linear transformations and ... special case of the Jordan canonical form of matrices ... Richard Bronson, Schaum's Outline Series Theory And Problems Of ...

  8. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
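
    The core Soft-Impute iteration is short enough to sketch directly: fill the missing entries with the current estimate, then soft-threshold the singular values. A toy version with a full SVD follows; the paper's implementation uses a low-rank SVD for scalability.

```python
# Toy Soft-Impute for matrix completion with nuclear-norm regularization.
import numpy as np

def soft_impute(X, mask, lam, n_iters=100):
    """X: data matrix (values where mask is False are ignored); mask: observed entries."""
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        filled = np.where(mask, X, Z)              # observed data + current guesses
        U, sig, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(sig - lam, 0)) @ Vt    # soft-thresholded reconstruction
    return Z

rng = np.random.default_rng(3)
M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))   # rank-3 truth
mask = rng.random(M.shape) < 0.5                                  # ~50% observed
M_hat = soft_impute(M, mask, lam=1.0)
print(np.abs((M_hat - M)[~mask]).mean())          # error on unobserved entries
```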

  9. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    Science.gov (United States)

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite-differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT for denoising show reduced sampling errors compared to direct MRI restoration methods via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is exploited to obtain the solution to MRI reconstruction, after transformation, from significantly undersampled k-space. The challenge, however, is that some incoherent artifacts result from the random undersampling, and noise-like interference is added to the image with sparse representation. The recovery algorithms in the literature are not capable of fully removing these artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank via selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed
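
    A minimal sketch of the singular-value-threshold idea as used here, keeping only the dominant singular values to fix the model order and suppress noise; the threshold rule below is an assumption, not the paper's exact choice.

```python
# Truncate the spectrum at a threshold: dominant singular values set the
# model order, and everything below the threshold is treated as noise.
import numpy as np

def svt_denoise(Y, tau):
    U, sig, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = sig > tau                               # dominant singular values
    return (U[:, keep] * sig[keep]) @ Vt[keep, :], int(keep.sum())

Y = np.random.default_rng(4).standard_normal((64, 64))
den, rank = svt_denoise(Y, tau=10.0)
print(rank)                                        # selected model order
```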

  10. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  11. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  12. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculation and analysis of all state variables' Lyapunov exponents and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances change to a certain degree.

  13. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Vol. 12, No. 1 (2017), pp. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords: econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  14. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Vol. 12, No. 1 (2017), pp. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others: GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords: econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  15. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  16. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
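
    To make the linear scaling concrete, here is a first-order cut-HDMR sketch that anchors all variables at a reference point and varies one at a time, so the number of function evaluations grows linearly in the dimension. The fuzzy α-cut propagation step, which would evaluate this surrogate at interval endpoints, is omitted.

```python
# First-order cut-HDMR: f(x) ≈ f0 + sum_i f_i(x_i), where each component
# is built by varying one variable along a grid with the rest anchored.
import numpy as np

def cut_hdmr_first_order(f, x0, grids):
    """Return f0 = f(x0) and the univariate components f_i(x_i) - f0."""
    f0 = f(x0)
    comps = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = x0.copy()
            x[i] = xi                      # vary one variable at a time
            vals.append(f(x) - f0)
        comps.append(np.array(vals))
    return f0, comps

f = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]
x0 = np.zeros(2)
grids = [np.linspace(-1, 1, 5)] * 2
f0, comps = cut_hdmr_first_order(f, x0, grids)
print(f0, comps[0])
```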

  17. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  18. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramount to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
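
    A schematic version of the FAIR recipe follows: rank features by two-sample t-statistics, keep the top m, and classify with the independence (diagonal) rule on that subset. The choice of m and the variance pooling below are illustrative rather than the paper's error-bound rule.

```python
# FAIR sketch: t-statistic feature screening + diagonal independence rule.
import numpy as np

def two_sample_t(X1, X2):
    n1, n2 = len(X1), len(X2)
    num = X1.mean(0) - X2.mean(0)
    den = np.sqrt(X1.var(0, ddof=1) / n1 + X2.var(0, ddof=1) / n2)
    return num / den

def fair_classify(X1, X2, x_new, m):
    t = np.abs(two_sample_t(X1, X2))
    idx = np.argsort(t)[-m:]                      # keep the top-m features
    mu1, mu2 = X1[:, idx].mean(0), X2[:, idx].mean(0)
    s2 = np.concatenate([X1[:, idx], X2[:, idx]]).var(0, ddof=1)
    score = ((x_new[idx] - mu2) ** 2 - (x_new[idx] - mu1) ** 2) / s2
    return 1 if score.sum() > 0 else 2            # assign to the closer centroid

rng = np.random.default_rng(7)
X1 = rng.standard_normal((30, 500)) + np.r_[np.ones(10), np.zeros(490)]
X2 = rng.standard_normal((30, 500))
print(fair_classify(X1, X2, X1.mean(0), m=20))    # expect class 1
```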

  19. Chemiluminescence in cryogenic matrices

    Science.gov (United States)

    Lotnik, S. V.; Kazakov, Valeri P.

    1989-04-01

    The literature data on chemiluminescence (CL) in cryogenic matrices have been classified and correlated for the first time. The role of studies on phosphorescence and CL at low temperatures in the development of cryochemistry is shown. The features of low-temperature CL in matrices of nitrogen and inert gases (fine structure of spectra, matrix effects) and the data on the mobility and reactivity of atoms and radicals at very low temperatures are examined. The trends in the development of studies on CL in cryogenic matrices, such as the search for systems involving polyatomic molecules and extending the forms of CL reactions, are followed. The reactions of active nitrogen with hydrocarbons that are accompanied by light emission and CL in the oxidation of carbenes at T >= 77 K are examined. The bibliography includes 112 references.

  20. Change in surface characteristics of coal in upgrading of low-rank coals; Teihin`itan kaishitsu process ni okeru sekitan hyomen seijo no henka

    Energy Technology Data Exchange (ETDEWEB)

    Oki, A.; Xie, X.; Nakajima, T.; Maeda, S. [Kagoshima University, Kagoshima (Japan). Faculty of Engineering

    1996-10-28

    With the objective of understanding the mechanisms of low-rank coal reformation processes, changes in the properties of the coal surface were discussed. Difficulty in handling low-rank coal is attributed to its large intrinsic water content. Since it contains highly volatile components, it carries a danger of spontaneous ignition. The hot water drying (HWD) method was used for reformation. Coal that had been dry-pulverized to a grain size of 1 mm or smaller was mixed with water to make a slurry, heated in an autoclave, cooled, filtered, and dried in vacuum. The HWD applied to Loy Yang and Yallourn coals resulted in a rapid rise in pressure starting from about 250°C. The water content (ANA value) absorbed into the coal decreased greatly, with the surface made effectively hydrophobic due to high temperature and pressure. Hydroxyl group and carbonyl group contents in the coal decreased greatly with rising reformation treatment temperature (according to FT-IR measurement). The specific surface area of the original Loy Yang coal was 138 m²/g, while it decreased greatly to 73 m²/g when the reformation temperature was raised to 350°C. This is because volatile components dissolve from the coal as tar and block the surface pores. 2 refs., 4 figs.

  1. Formation of N2 in the fixed-bed pyrolysis of low rank coals and the mechanisms; Koteisho netsubunkai ni okeru teitankatan kara no N2 no sisei

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Z.; Otsuka, Y. [Tohoku University, Sendai (Japan). Institute for Chemical Reaction Science

    1996-10-28

    In order to establish preventive measures against coal NOx, discussions were given on the formation of N2 in the fixed-bed pyrolysis of low-rank coals and the mechanisms thereof. Chinese ZN coal and German RB coal were used for the discussions. Neither coal produces N2 at 600°C, and the main product is volatile nitrogen. Conversion into N2 does not depend on heating rate, but increases linearly with increasing temperature, reaching 65% to 70% at 1200°C. In contrast, char nitrogen decreases linearly with temperature. More specifically, these phenomena suggest that the char nitrogen or its precursor is the major source of N2. When mineral substances are removed by using hydrochloric acid, their catalytic action is lost, and conversion into N2 decreases remarkably. Iron existing in an ion-exchanged state in low-rank coal is reduced and finely dispersed into metallic iron particles. The particles react with heterocyclic nitrogen compounds and turn into iron nitride. A solid-phase reaction mechanism may be conceived, in which N2 is produced by the decomposition of the iron nitride. 5 refs., 4 figs., 1 tab.

  2. Effect of blending ratio to the liquid product on co-pyrolysis of low rank coal and oil palm empty fruit bunch

    Directory of Open Access Journals (Sweden)

    Zullaikah Siti

    2018-01-01

    Full Text Available The utilization of Indonesian low-rank coal should be maximized, since Indonesian low-rank coal reserves are abundant. Pyrolysis of this coal can produce a liquid product which can be utilized as fuel and as a chemical feedstock. The yield of liquid product is still low due to the low H/C ratio. Since coal is a non-renewable resource, in an effort to save coal and to mitigate the production of greenhouse gases, biomass such as oil palm empty fruit bunch (EFB) was added as co-feeding. EFB can act as a hydrogen donor in co-pyrolysis to increase the liquid product. Co-pyrolysis of Indonesian low-rank coal and EFB was studied in a drop tube reactor at a fixed temperature (T = 500°C) and time (t = 1 h), using N2 as purge gas. The effects of blending ratios of coal/EFB (100/0, 75/25, 50/50, 25/75 and 0/100, w/w%) on the yield and composition of the liquid product were studied systematically. The results showed that the higher the proportion of EFB in the blend, the higher the yields of liquid product and gas, while the char yield decreased. The highest yield of liquid product (28.62%) was obtained using a blending ratio of coal/EFB = 25/75, w/w%. The tar obtained at this ratio is composed of phenol, polycyclic aromatic hydrocarbons, alkanes, acids, and esters.

  3. Thermal characteristics and surface morphology of char during co-pyrolysis of low-rank coal blended with microalgal biomass: Effects of Nannochloropsis and Chlorella.

    Science.gov (United States)

    Wu, Zhiqiang; Yang, Wangcai; Yang, Bolun

    2018-02-01

    In this work, the influence of Nannochloropsis and Chlorella on the thermal behavior and surface morphology of char during the co-pyrolysis process was explored. Thermogravimetric and iso-conversional methods were applied to analyze the pyrolytic and kinetic characteristics for different mass ratios of microalgae to low-rank coal (0, 3:1, 1:1, 1:3 and 1). Fractal theory was used to quantitatively determine the effect of microalgae on the morphological texture of the co-pyrolysis char. The results indicated that both Nannochloropsis and Chlorella promoted the release of volatiles from the low-rank coal. Different synergistic effects on the thermal parameters and volatile yield were observed, which could be attributed to the different compositions of Nannochloropsis and Chlorella and the operating conditions. The distribution of activation energies shows nonadditive characteristics. Fractal dimensions of the co-pyrolysis chars were higher than those of the individual chars, indicating an increased degree of disorder due to the addition of microalgae. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Matrices in Engineering Problems

    CERN Document Server

    Tobias, Marvin

    2011-01-01

    This book is intended as an undergraduate text introducing matrix methods as they relate to engineering problems. It begins with the fundamentals of mathematics of matrices and determinants. Matrix inversion is discussed, with an introduction of the well known reduction methods. Equation sets are viewed as vector transformations, and the conditions of their solvability are explored. Orthogonal matrices are introduced with examples showing application to many problems requiring three dimensional thinking. The angular velocity matrix is shown to emerge from the differentiation of the 3-D orthogo

  5. Infinite matrices and sequence spaces

    CERN Document Server

    Cooke, Richard G

    2014-01-01

    This clear and correct summation of basic results from a specialized field focuses on the behavior of infinite matrices in general, rather than on properties of special matrices. Three introductory chapters guide students to the manipulation of infinite matrices, covering definitions and preliminary ideas, reciprocals of infinite matrices, and linear equations involving infinite matrices.From the fourth chapter onward, the author treats the application of infinite matrices to the summability of divergent sequences and series from various points of view. Topics include consistency, mutual consi

  6. Capture Matrices Handbook

    Science.gov (United States)

    2014-04-01

    materials, the affinity ligand would need identification, as well as chemistries that graft the affinity ligand onto the surface of magnetic ... ACTIVE CAPTURE MATRICES FOR THE DETECTION/IDENTIFICATION OF PHARMACEUTICALS ... As shown in Figure 2.3-1a, the spectra exhibit similar baselines and the spectral peaks line up. Under these circumstances, the spectral

  7. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation to the spacetime into consideration. Under the condition that the energy and angular momentum are conserved, taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitary principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)

  8. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  9. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa, and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa.

  10. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  11. Improving temporal resolution in fMRI using a 3D spiral acquisition and low rank plus sparse (L+S) reconstruction.

    Science.gov (United States)

    Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V

    2017-08-15

    Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise for improving temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain and slowly varying low-rank brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition where undersampling is performed in the k_z-t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospective and prospectively acquired fMRI data for a visual activation block-design task. We show that a SoS fMRI acquisition with an
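
    For intuition, a compact sketch of the L+S split for a fully sampled space-time matrix M: L absorbs the slowly varying low-rank background via singular value thresholding, while S is soft-thresholded in the temporal Fourier domain, where periodic block-design signals are sparse. The k-space encoding and undersampling operators of the actual reconstruction are omitted, and the thresholds below are arbitrary.

```python
# L+S split of a space-time matrix: SVT for L, temporal-Fourier soft
# thresholding for S. Alternating updates; no data-consistency operator.
import numpy as np

def l_plus_s(M, lam_l, lam_s, n_iters=50):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iters):
        # L-update: singular value thresholding on the residual
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - lam_l, 0)) @ Vt
        # S-update: soft thresholding in the temporal Fourier domain
        F = np.fft.fft(M - L, axis=1)
        F = F * np.maximum(1 - lam_s / np.maximum(np.abs(F), 1e-12), 0)
        S = np.fft.ifft(F, axis=1).real
    return L, S

rng = np.random.default_rng(8)
bg = np.outer(rng.standard_normal(64), np.ones(128))                  # rank-1 background
sig = np.outer(rng.random(64) < 0.1, np.sin(2 * np.pi * np.arange(128) / 16))
L, S = l_plus_s(bg + sig, lam_l=5.0, lam_s=2.0)
```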

  12. MO-DE-207A-05: Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT

    International Nuclear Information System (INIS)

    Xu, Q; Liu, H; Xing, L; Yu, H; Wang, G

    2016-01-01

    Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. Conclusion: The proposed method can effectively improve the image quality of low-dose spectral

  13. MO-DE-207A-05: Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q [Xi’an Jiaotong University, Xi’an (China); Stanford University School of Medicine, Stanford, CA (United States); Liu, H; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Yu, H [University of Massachusetts Lowell, Lowell, MA (United States); Wang, G [Rensselaer Polytechnic Instute., Troy, NY (United States)

    2016-06-15

    Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. Conclusion: The proposed method can effectively improve the image quality of low-dose spectral

  14. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the powers of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
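
    A hedged sketch of a max-type one-sample test with a parametric-bootstrap critical value; the studentization and resampling details follow one common variant and are not necessarily those of the HDtest package.

```python
# Max-type statistic with a parametric bootstrap from the estimated
# correlation structure to calibrate the critical value.
import numpy as np

def max_type_test(X, B=500, alpha=0.05, seed=0):
    n, p = X.shape
    t = np.sqrt(n) * X.mean(0) / X.std(0, ddof=1)   # studentized means
    stat = np.abs(t).max()
    rng = np.random.default_rng(seed)
    Sigma = np.corrcoef(X, rowvar=False)            # estimated correlation
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=B)
    crit = np.quantile(np.abs(Z).max(axis=1), 1 - alpha)
    return stat, crit, stat > crit

X = np.random.default_rng(5).standard_normal((60, 40))
print(max_type_test(X))
```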

  15. Introduction to matrices and vectors

    CERN Document Server

    Schwartz, Jacob T

    2001-01-01

    In this concise undergraduate text, the first three chapters present the basics of matrices - in later chapters the author shows how to use vectors and matrices to solve systems of linear equations. 1961 edition.

  16. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
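
    FLANN is conveniently reachable through its OpenCV binding; a typical descriptor-matching snippet with a randomized k-d forest follows. The index parameters are common defaults, not tuned values, and the random descriptors stand in for real SIFT-like features.

```python
# FLANN matching via OpenCV: randomized k-d forest on float descriptors,
# followed by Lowe's ratio test. Requires opencv-python.
import numpy as np
import cv2

des1 = np.random.rand(500, 128).astype(np.float32)   # query descriptors
des2 = np.random.rand(800, 128).astype(np.float32)   # training descriptors

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=4)
search_params = dict(checks=64)                      # leaves visited per query

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
print(len(good))
```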

  17. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties owned by the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing maps. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with a high risk of disruption and those with a low risk of disruption. (paper)

  18. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task

  20. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  1. Graphs and matrices

    CERN Document Server

    Bapat, Ravindra B

    2014-01-01

    This new edition illustrates the power of linear algebra in the study of graphs. The emphasis on matrix techniques is greater than in other texts on algebraic graph theory. Important matrices associated with graphs (for example, incidence, adjacency and Laplacian matrices) are treated in detail. Presenting a useful overview of selected topics in algebraic graph theory, early chapters of the text focus on regular graphs, algebraic connectivity, the distance matrix of a tree, and its generalized version for arbitrary graphs, known as the resistance matrix. Coverage of later topics include Laplacian eigenvalues of threshold graphs, the positive definite completion problem and matrix games based on a graph. Such an extensive coverage of the subject area provides a welcome prompt for further exploration. The inclusion of exercises enables practical learning throughout the book. In the new edition, a new chapter is added on the line graph of a tree, while some results in Chapter 6 on Perron-Frobenius theory are reo...

  2. Hierarchical quark mass matrices

    International Nuclear Information System (INIS)

    Rasin, A.

    1998-02-01

    I define a set of conditions that the most general hierarchical Yukawa mass matrices have to satisfy so that the leading rotations in the diagonalization matrix are a pair of (2,3) and (1,2) rotations. In addition to Fritzsch structures, examples of such hierarchical structures include also matrices with (1,3) elements of the same order or even much larger than the (1,2) elements. Such matrices can be obtained in the framework of a flavor theory. To leading order, the values of the angle in the (2,3) plane ($s_{23}$) and the angle in the (1,2) plane ($s_{12}$) do not depend on the order in which they are taken when diagonalizing. We find that any of the Cabibbo-Kobayashi-Maskawa matrix parametrizations that consist of at least one (1,2) and one (2,3) rotation may be suitable. In the particular case when the $s_{13}$ diagonalization angles are sufficiently small compared to the product $s_{12}s_{23}$, two special CKM parametrizations emerge: the $R_{12}R_{23}R_{12}$ parametrization follows with $s_{23}$ taken before the $s_{12}$ rotation, and vice versa for the $R_{23}R_{12}R_{23}$ parametrization. (author)
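
    For reference, the (1,2) and (2,3) rotations referred to above have the standard forms below (sign conventions assumed; CP phases omitted):

```latex
R_{12}(\theta) =
\begin{pmatrix}
 \cos\theta & \sin\theta & 0 \\
-\sin\theta & \cos\theta & 0 \\
 0 & 0 & 1
\end{pmatrix},
\qquad
R_{23}(\theta) =
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & -\sin\theta & \cos\theta
\end{pmatrix}
```

    so that a parametrization such as $R_{12}R_{23}R_{12}$ is a product of three such factors with suitable angles.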

  3. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class

  4. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those measurements. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  5. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
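
    For scalar sequences, the filtered derivative statistic mentioned above can be sketched in a few lines: compare the means of two adjacent sliding windows and report the location where the absolute difference peaks. The window length and the data below are hypothetical.

```python
import numpy as np

def filtered_derivative(x, h):
    """Difference of means over adjacent windows of length h."""
    kernel = np.r_[np.ones(h), -np.ones(h)] / h
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(1)
# Mean shift from 0 to 2 at index 300.
x = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)]

h = 40
stat = np.abs(filtered_derivative(x, h))
tau = stat.argmax() + h          # align statistic index with the series
print("estimated change point near", tau)   # close to the true value 300
```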

  6. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that new data projected onto the span of the training data set input vectors follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  7. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-consuming to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
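
    The recursion underlying RNI can be illustrated for an integrand that factorizes over nearest neighbours: one low-dimensional Gauss rule is reused for every dimension, and the d-dimensional sum collapses into repeated matrix-vector products. The kernel below is a hypothetical Boltzmann-like coupling, not the topological rotor or anharmonic oscillator actions treated in the paper.

```python
import numpy as np

# d-dimensional integral of a nearest-neighbour product,
#   Z = ∫ dx_1...dx_d  ∏_{i=1}^{d-1} k(x_i, x_{i+1}),
# evaluated by applying one m-point Gauss-Legendre rule recursively.

def rni_chain(kfun, d, m, a=-1.0, b=1.0):
    x, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * x + 0.5 * (b + a)       # map to [a, b]
    w = 0.5 * (b - a) * w
    K = kfun(x[:, None], x[None, :])            # m x m kernel matrix
    v = w.copy()
    for _ in range(d - 1):                      # one matrix-vector product
        v = w * (K @ v)                         # per integrated dimension
    return v.sum()                              # cost O(d m^2), not m^d

# Hypothetical coupling between neighbouring variables.
k = lambda s, t: np.exp(-0.5 * (s - t) ** 2)

print(rni_chain(k, d=20, m=32))   # a 20-dimensional integral in milliseconds
```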

  8. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  9. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  10. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation can not only measure the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freezing phenomenon. The former is much weaker than that in the sub-Ohmic or Ohmic thermal reservoir environment.
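
    Negativity, one of the two measures used above, is computed from the partial transpose of the density matrix. A minimal sketch for a bipartite qudit pair follows; the maximally entangled two-qutrit state is used only as a sanity check, not a system from the paper.

```python
import numpy as np

def negativity(rho, d):
    """Negativity of a d x d bipartite state via the partial transpose."""
    pt = (rho.reshape(d, d, d, d)
             .transpose(0, 3, 2, 1)      # transpose the second subsystem
             .reshape(d * d, d * d))
    eig = np.linalg.eigvalsh(pt)
    return float(-eig[eig < 0].sum())    # sum of |negative eigenvalues|

# Maximally entangled two-qutrit state: negativity (d - 1) / 2 = 1.
d = 3
psi = np.eye(d).reshape(-1) / np.sqrt(d)   # (|00> + |11> + |22>) / sqrt(3)
rho = np.outer(psi, psi.conj())
print(negativity(rho, d))                  # ≈ 1.0
```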

  11. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  13. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
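
    A minimal linear ROSM in the spirit of the thesis can be sketched as PCA on the simulation snapshots followed by a radial basis function map from design inputs to the PCA coefficients. The toy "simulation" and all sizes below are hypothetical; scikit-learn and SciPy are assumed available.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Hypothetical expensive simulation: 2 design inputs -> 10^4 field values.
def simulate(theta):
    s = np.linspace(0, 1, 10_000)
    return np.sin(2 * np.pi * (s + theta[0])) * np.exp(-theta[1] * s)

X = rng.uniform(0, 1, size=(60, 2))              # training designs
Y = np.array([simulate(t) for t in X])           # expensive snapshots

pca = PCA(n_components=5).fit(Y)                 # linear reduced basis
coeffs = pca.transform(Y)                        # low-dim coordinates
surrogate = RBFInterpolator(X, coeffs)           # inputs -> coefficients

theta_new = np.array([[0.3, 0.7]])
y_approx = pca.inverse_transform(surrogate(theta_new))
y_true = simulate(theta_new[0])
print(np.linalg.norm(y_approx - y_true) / np.linalg.norm(y_true))
```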

  14. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  15. Lectures on matrices

    CERN Document Server

    M Wedderburn, J H

    1934-01-01

    It is the organization and presentation of the material, however, which make the peculiar appeal of the book. This is no mere compendium of results-the subject has been completely reworked and the proofs recast with the skill and elegance which come only from years of devotion. -Bulletin of the American Mathematical Society The very clear and simple presentation gives the reader easy access to the more difficult parts of the theory. -Jahrbuch über die Fortschritte der Mathematik In 1937, the theory of matrices was seventy-five years old. However, many results had only recently evolved from sp

  16. Matrices and linear algebra

    CERN Document Server

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  17. Intermittency and random matrices

    Science.gov (United States)

    Sokoloff, Dmitry; Illarionov, E. A.

    2015-08-01

    A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.

  18. Dimension from covariance matrices.

    Science.gov (United States)

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
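
    The comparison described above can be sketched as follows: eigenvalues of the covariance of a delay-embedded signal are set against those of a same-size Gaussian baseline, and embedding directions whose eigenvalues rise above the noise level are counted. The counting rule below is a simplification of the paper's statistical test, and the signal is hypothetical.

```python
import numpy as np

def embed(x, dim, lag):
    """Time-delay embedding of a scalar series into `dim` coordinates."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

rng = np.random.default_rng(3)
t = np.arange(5000) * 0.05
signal = np.sin(t) + 0.8 * np.sin(np.sqrt(2.0) * t)  # two incommensurate tones

E = embed(signal, dim=10, lag=10)
eig_sig = np.sort(np.linalg.eigvalsh(np.cov(E.T)))[::-1]

# Baseline: eigenvalues for a Gaussian random process of the same size.
G = rng.normal(size=E.shape)
eig_ref = np.sort(np.linalg.eigvalsh(np.cov(G.T)))[::-1]

# Simplified counting rule: embedding directions whose share of the
# variance exceeds the largest share seen in the Gaussian baseline.
frac_sig = eig_sig / eig_sig.sum()
frac_ref = eig_ref / eig_ref.sum()
print(int((frac_sig > frac_ref[0]).sum()))   # ~4: one pair per tone
```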

  19. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would

  20. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety-percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high-dimensional

  1. FY1995 development of economical and high efficient desulfurization process using low rank coal; 1995 nendo teitankadotan wo mochiita ankana kokoritsu datsuryuho no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The objective of this study is to develop a new efficient desulfurization technique using a Ca ion-exchanged coal prepared from low rank coal and calcium raw material as a SO{sub 2} sorbent. Ion-exchange of calcium was carried out by soaking and mixing brown coal particles in milk of lime or a slurry of industrial waste from a concrete manufacturing process. About 10wt% of Ca was easily incorporated into Yallourn coal. The ion-exchanged Ca was transformed into ultra-fine CaO particles upon pyrolysis of the coal. The reactivity of the CaO produced from Ca-exchanged coal with SO{sub 2} was extraordinarily high, and a CaO utilization above 90% was easily achieved, while the conversion of natural limestone was less than 30% under similar experimental conditions. The high activity of Ca-exchanged coal was also appreciably observed in a pressurized fluidized bed combustor. Ca-exchanged coal was quite effective for the removal of hydrogen sulfide. (NEDO)

  2. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...

  3. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)

  4. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter trading data fitting versus sparsity. For the Lasso theory to hold this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification we coined Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency, by eliminating early irrelevant features.
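
    The joint estimation idea can be sketched as follows. This is not the authors' coordinate-descent solver, only a naive alternation between a Lasso fit whose penalty is proportional to the current noise estimate and an update of that estimate, floored at a smoothing level sigma0 in the spirit of the Smoothed Concomitant formulation. All sizes and constants are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def smoothed_concomitant_lasso(X, y, lam0, sigma0, n_iter=20):
    """Alternate between the noise level and a Lasso fit.

    Naive sketch: alpha is tied to the current sigma estimate, and
    sigma is floored at sigma0 for numerical stability.
    """
    n = len(y)
    sigma = max(np.std(y), sigma0)
    for _ in range(n_iter):
        model = Lasso(alpha=lam0 * sigma, max_iter=5000).fit(X, y)
        beta = model.coef_
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma0)
    return beta, sigma

rng = np.random.default_rng(4)
n, p, s = 100, 500, 5
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:s] = 1.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)

lam0 = np.sqrt(2 * np.log(p) / n)   # universal scaling, noise level factored out
beta, sigma = smoothed_concomitant_lasso(X, y, lam0, sigma0=1e-2)
print(round(sigma, 3), int((np.abs(beta) > 1e-8).sum()))  # sigma near 0.5
```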

  5. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  6. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  7. Complex Wedge-Shaped Matrices: A Generalization of Jacobi Matrices

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, M.

    2015-01-01

    Roč. 487, 15 December (2015), s. 203-219 ISSN 0024-3795 R&D Projects: GA ČR GA13-06684S Keywords : eigenvalues * eigenvector * wedge-shaped matrices * generalized Jacobi matrices * band (or block) Krylov subspace methods Subject RIV: BA - General Mathematics Impact factor: 0.965, year: 2015

  8. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    Science.gov (United States)

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the newly proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.

  9. Generalisations of Fisher Matrices

    Directory of Open Access Journals (Sweden)

    Alan Heavens

    2016-06-01

    Full Text Available Fisher matrices play an important role in experimental design and in data analysis. Their primary role is to make predictions for the inference of model parameters, both their errors and covariances. In this short review, I outline a number of extensions to the simple Fisher matrix formalism, covering a number of recent developments in the field. These are: (a) situations where the data (in the form of (x, y) pairs) have errors in both x and y; (b) modifications to parameter inference in the presence of systematic errors, or through fixing the values of some model parameters; (c) Derivative Approximation for LIkelihoods (DALI), higher-order expansions of the likelihood surface going beyond the Gaussian shape approximation; (d) extensions of the Fisher-like formalism to treat model selection problems with Bayesian evidence.
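
    For reference, the simple Fisher matrix that all of these extensions build on can be computed directly for a Gaussian likelihood with parameter-independent errors, F_ab = Σ_i (∂μ_i/∂θ_a)(∂μ_i/∂θ_b)/σ_i². The sketch below uses finite differences and a hypothetical straight-line model.

```python
import numpy as np

def fisher_matrix(mu, theta, sigma, eps=1e-6):
    """F_ab = sum_i (dmu_i/dtheta_a)(dmu_i/dtheta_b) / sigma_i^2
    for a Gaussian likelihood with fixed (parameter-independent) errors."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    grads = []
    for a in range(p):
        dt = np.zeros(p); dt[a] = eps
        grads.append((mu(theta + dt) - mu(theta - dt)) / (2 * eps))
    J = np.array(grads)                  # p x n_data Jacobian
    return (J / sigma**2) @ J.T

# Hypothetical model: straight line mu(x) = theta_0 + theta_1 * x.
x = np.linspace(0, 1, 50)
model = lambda th: th[0] + th[1] * x
F = fisher_matrix(model, [1.0, 2.0], sigma=np.full_like(x, 0.1))

cov = np.linalg.inv(F)                   # forecast parameter covariance
print(np.sqrt(np.diag(cov)))             # marginal 1-sigma errors
```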

  10. Random volumes from matrices

    Energy Technology Data Exchange (ETDEWEB)

    Fukuma, Masafumi; Sugishita, Sotaro; Umeda, Naoya [Department of Physics, Kyoto University,Kitashirakawa Oiwake-cho, Kyoto 606-8502 (Japan)

    2015-07-17

    We propose a class of models which generate three-dimensional random volumes, where each configuration consists of triangles glued together along multiple hinges. The models have matrices as the dynamical variables and are characterized by semisimple associative algebras A. Although most of the diagrams represent configurations which are not manifolds, we show that the set of possible diagrams can be drastically reduced such that only (and all of the) three-dimensional manifolds with tetrahedral decompositions appear, by introducing a color structure and taking an appropriate large N limit. We examine the analytic properties when A is a matrix ring or a group ring, and show that the models with matrix ring have a novel strong-weak duality which interchanges the roles of triangles and hinges. We also give a brief comment on the relationship of our models with the colored tensor models.

  11. Numerically Stable Evaluation of Moments of Random Gram Matrices With Applications

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2017-01-01

    This paper focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but numerical evaluation thereof is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.

  13. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  14. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.
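
    The role of the anchor point z is easy to make concrete: if f is exactly rank one and f(z) ≠ 0, then the product of the d axis-aligned queries f(z_1,...,x_j,...,z_d) equals f(x)·f(z)^(d-1), so f is determined by its values on lines through z. A sketch with a hypothetical test function:

```python
import numpy as np

# Rank-one function of d = 6 variables (hidden from the algorithm).
d = 6
f = lambda x: np.prod([np.sin(1 + x[j] * (j + 1)) for j in range(d)])

z = np.full(d, 0.5)                      # anchor point with f(z) != 0
fz = f(z)

def f_hat(x):
    """Reconstruct f(x) from d axis-aligned queries through z, using
    prod_j f(z_1,..,x_j,..,z_d) = f(x) * f(z)**(d-1)."""
    val = 1.0
    for j in range(d):
        q = z.copy(); q[j] = x[j]        # replace only the j-th coordinate
        val *= f(q)
    return val / fz ** (d - 1)

x = np.random.default_rng(5).uniform(0, 1, d)
print(f(x), f_hat(x))                    # agree to machine precision
```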

  15. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.

  17. VanderLaan Circulant Type Matrices

    Directory of Open Access Journals (Sweden)

    Hongyan Pan

    2015-01-01

    Full Text Available Circulant matrices have become a satisfactory tool in control methods for modern complex systems. In this paper, VanderLaan circulant type matrices are presented, which include VanderLaan circulant, left circulant, and g-circulant matrices. The nonsingularity of these special matrices is discussed via the surprising properties of VanderLaan numbers. The exact determinants of VanderLaan circulant type matrices are given by structuring transformation matrices, determinants of well-known tridiagonal matrices, and tridiagonal-like matrices. The explicit inverses of these special matrices are obtained by structuring transformation matrices, inverses of known tridiagonal matrices, and quasi-tridiagonal matrices. Three kinds of norms and a lower bound for the spread of VanderLaan circulant and left circulant matrices are given separately. We also obtain the spectral norm of the VanderLaan g-circulant matrix.
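
    Although the paper derives its determinants and norms through transformation matrices, the spectral facts it exploits can be checked numerically via the DFT: the eigenvalues of any circulant matrix are the discrete Fourier transform of its first column. The first column below is a hypothetical stand-in, not the VanderLaan sequence itself.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1.0, 0.0, 1.0, 1.0, 2.0])   # hypothetical first column
C = circulant(c)                           # C[i, j] = c[(i - j) mod n]

# Eigenvalues of a circulant are the DFT of its first column.
lam = np.fft.fft(c)
print(np.prod(lam).real, np.linalg.det(C))        # determinants agree
print(np.abs(lam).max(), np.linalg.norm(C, 2))    # spectral norms agree
```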

  18. Diagonalization of the mass matrices

    International Nuclear Information System (INIS)

    Rhee, S.S.

    1984-01-01

    It is possible to construct 20 types of 3x3 mass matrices which are Hermitian. We have obtained unitary matrices which diagonalize each mass matrix. Since the three elements of the mass matrix can be expressed in terms of the three eigenvalues m_i, we can also express the unitary matrix in terms of m_i. (Author)

  19. Enhancing Understanding of Transformation Matrices

    Science.gov (United States)

    Dick, Jonathan; Childrey, Maria

    2012-01-01

    With the Common Core State Standards' emphasis on transformations, teachers need a variety of approaches to increase student understanding. Teaching matrix transformations by focusing on row vectors gives students tools to create matrices to perform transformations. This empowerment opens many doors: Students are able to create the matrices for…

  20. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  1. Intrinsic character of Stokes matrices

    Science.gov (United States)

    Gagnon, Jean-François; Rousseau, Christiane

    2017-02-01

    Two germs of linear analytic differential systems x^(k+1) Y′ = A(x) Y with a non-resonant irregular singularity are analytically equivalent if and only if they have the same eigenvalues and equivalent collections of Stokes matrices. The Stokes matrices are the transition matrices between sectors on which the system is analytically equivalent to its formal normal form. Each sector contains exactly one separating ray for each pair of eigenvalues. A rotation in S allows one to suppose that R+ lies in the intersection of two sectors. Reordering the coordinates of Y allows ordering the real parts of the eigenvalues, thus yielding triangular Stokes matrices. However, the choice of the rotation in x is not canonical. In this paper we establish how the collection of Stokes matrices depends on this rotation, and hence on a chosen order of the projection of the eigenvalues on a line through the origin.

  2. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution, depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  3. A novel solar energy integrated low-rank coal fired power generation using coal pre-drying and an absorption heat pump

    International Nuclear Information System (INIS)

    Xu, Cheng; Bai, Pu; Xin, Tuantuan; Hu, Yue; Xu, Gang; Yang, Yongping

    2017-01-01

    Highlights: •An improved solar energy integrated LRC fired power generation is proposed. •Highly efficient and economically feasible solar energy conversion is achieved. •Cold-end losses of the boiler and condenser are reduced. •The energy and exergy efficiencies of the overall system are improved. -- Abstract: A novel solar energy integrated low-rank coal (LRC) fired power generation using coal pre-drying and an absorption heat pump (AHP) was proposed. The proposed integrated system efficiently utilizes the solar energy collected from the parabolic trough to drive the AHP to absorb the low-grade waste heat of the steam cycle, providing a larger amount of heat at a temperature suitable for removing the coal's moisture prior to the furnace. Through employing the proposed system, the solar energy could be partially converted into the high-grade heating value of the coal, and the cold-end losses of the boiler and the steam cycle could be reduced simultaneously, leading to highly efficient solar energy conversion together with a preferable overall thermal efficiency of the power generation. The results of the detailed thermodynamic and economic analyses showed that using the proposed integrated concept in a typical 600 MW LRC-fired power plant could reduce the raw coal consumption by 4.6 kg/s, with overall energy and exergy efficiency improvements of 1.2 and 1.8 percentage points, respectively, as 73.0 MWth of solar thermal energy was introduced. The cost of the solar generated electric power could be as low as $0.044/kWh. This work provides an improved concept to further advance solar energy conversion and utilisation in solar-hybrid coal-fired power generation.

  4. PhyloPythiaS+: a self-training method for the rapid reconstruction of low-ranking taxonomic bins from metagenomes.

    Science.gov (United States)

    Gregor, Ivan; Dröge, Johannes; Schirmer, Melanie; Quince, Christopher; McHardy, Alice C

    2016-01-01

    Background. Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for species bin recovery from deep-branching phyla is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in the model and identifies 'training' sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S) software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of 4-6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software allows the analysis of Gb-sized metagenomes with inexpensive hardware, and the recovery of species- or genera-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X on: https://github.com/algbioi/ppsp/wiki.
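
    The simultaneous 4-6-mer counting highlighted above can be illustrated in pure Python (this is only a naive illustration, not the accelerated algorithm of PhyloPythiaS+) by sweeping the sequence once and updating one counter per k:

```python
from collections import Counter

def kmer_counts(seq, ks=(4, 5, 6)):
    """Count all k-mers for several k in one pass over the sequence."""
    counts = {k: Counter() for k in ks}
    for i in range(len(seq)):
        for k in ks:
            if i + k <= len(seq):
                counts[k][seq[i:i + k]] += 1
    return counts

contig = "ACGTACGTGACCTGAAACGT"        # toy contig
counts = kmer_counts(contig)
print(counts[4].most_common(3))
```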

  5. Special matrices of mathematical physics stochastic, circulant and Bell matrices

    CERN Document Server

    Aldrovandi, R

    2001-01-01

    This book expounds three special kinds of matrices that are of physical interest, centering on physical examples. Stochastic matrices describe dynamical systems of many different types, involving (or not) phenomena like transience, dissipation, ergodicity, nonequilibrium, and hypersensitivity to initial conditions. The main characteristic is growth by agglomeration, as in glass formation. Circulants are the building blocks of elementary Fourier analysis and provide a natural gateway to quantum mechanics and noncommutative geometry. Bell polynomials offer closed expressions for many formulas co

  6. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
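
    A commonly stated form of the modified RV-coefficient removes the diagonals of XX' and YY' before correlating them, which counteracts the size-driven inflation of the ordinary RV-coefficient in high dimensions. A sketch of that form (the exact definition used in the paper should be checked against the original; column-centering is applied inside the function):

```python
import numpy as np

def modified_rv(X, Y):
    """Modified RV-coefficient (a commonly stated form): correlate
    XX' and YY' after removing their diagonals; columns are centered."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx = X @ X.T; Sx -= np.diag(np.diag(Sx))
    Sy = Y @ Y.T; Sy -= np.diag(np.diag(Sy))
    return np.sum(Sx * Sy) / np.sqrt(np.sum(Sx**2) * np.sum(Sy**2))

rng = np.random.default_rng(6)
Z = rng.normal(size=(20, 3))                        # shared latent scores
X = Z @ rng.normal(size=(3, 1000)) + 0.1 * rng.normal(size=(20, 1000))
Y = Z @ rng.normal(size=(3, 2000)) + 0.1 * rng.normal(size=(20, 2000))
print(modified_rv(X, Y))                            # high: shared structure
print(modified_rv(X, rng.normal(size=(20, 2000))))  # near zero
```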

  8. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
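
    For contrast with the gradient-free method developed in the paper, the classic gradient-based active subspace construction it refers to can be sketched directly: estimate C = E[∇f ∇fᵀ] by Monte Carlo and eigendecompose it; a spectral gap reveals the AS dimension. The test function below is hypothetical, chosen so the true AS is one-dimensional.

```python
import numpy as np

rng = np.random.default_rng(7)

# Test function with a hidden one-dimensional active subspace:
# f(x) = g(w.x), so every gradient is parallel to w.
D = 50
w = rng.normal(size=D); w /= np.linalg.norm(w)
f = lambda x: np.exp(0.5 * (x @ w))
grad_f = lambda x: 0.5 * f(x) * w            # analytic gradient

# Classic (gradient-based) active subspace: eigendecompose
# C = E[grad f grad f^T], estimated from Monte Carlo samples.
Xs = rng.normal(size=(500, D))
G = np.array([grad_f(x) for x in Xs])
C = G.T @ G / len(Xs)
eigval, eigvec = np.linalg.eigh(C)

# A large spectral gap after the top eigenvalue reveals a 1-D AS,
# and the leading eigenvector recovers w up to sign.
print(eigval[-3:])
print(abs(eigvec[:, -1] @ w))                # ≈ 1
```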

  9. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  10. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  11. The invariant theory of matrices

    CERN Document Server

    Concini, Corrado De

    2017-01-01

    This book gives a unified, complete, and self-contained exposition of the main algebraic theorems of invariant theory for matrices in a characteristic-free approach. More precisely, it contains the description of polynomial functions in several variables on the set of m × m matrices with coefficients in an infinite field, or even the ring of integers, invariant under simultaneous conjugation. Following Hermann Weyl's classical approach, the ring of invariants is described by formulating and proving the first fundamental theorem that describes a set of generators in the ring of invariants, and the second fundamental theorem that describes relations between these generators. The authors study both the case of matrices over a field of characteristic 0 and the case of matrices over a field of positive characteristic. While the case of characteristic 0 can be treated following a classical approach, the case of positive characteristic (developed by Donkin and Zubkov) is much harder. A presentation of this case...

  12. Quantum matrices in two dimensions

    International Nuclear Information System (INIS)

    Ewen, H.; Ogievetsky, O.; Wess, J.

    1991-01-01

    Quantum matrices in two dimensions, admitting left and right quantum spaces, are classified: they fall into two families, the 2-parametric family GL_{p,q}(2) and a 1-parametric family GL_{αJ}(2). Phenomena previously found for GL_{p,q}(2) hold in this general situation: (a) powers of quantum matrices are again quantum and (b) entries of the logarithm of a two-dimensional quantum matrix form a Lie algebra. (orig.)

  13. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.

    2017-09-07

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.
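
    For context, a plain dense Monte Carlo estimator of a multivariate normal probability looks as follows; each sample costs O(n²) through the dense Cholesky factor, which is exactly the per-sample cost the hierarchical low-rank decomposition above reduces. The covariance and integration limits below are arbitrary stand-ins, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)           # an arbitrary SPD covariance
L = np.linalg.cholesky(Sigma)
b = np.full(n, 10.0)                      # upper integration limits

m = 50_000
Z = rng.standard_normal((m, n))
X = Z @ L.T                               # samples from N(0, Sigma)
print((X <= b).all(axis=1).mean())        # Monte Carlo estimate of P(X <= b)
```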

  14. Hierarchical Decompositions for the Computation of High-Dimensional Multivariate Normal Probabilities

    KAUST Repository

    Genton, Marc G.; Keyes, David E.; Turkiyyah, George

    2017-01-01

    We present a hierarchical decomposition scheme for computing the n-dimensional integral of multivariate normal probabilities that appear frequently in statistics. The scheme exploits the fact that the formally dense covariance matrix can be approximated by a matrix with a hierarchical low rank structure. It allows the reduction of the computational complexity per Monte Carlo sample from O(n²) to O(mn + kn log(n/m)), where k is the numerical rank of off-diagonal matrix blocks and m is the size of small diagonal blocks in the matrix that are not well-approximated by low rank factorizations and treated as dense submatrices. This hierarchical decomposition leads to substantial efficiencies in multivariate normal probability computations and allows integrations in thousands of dimensions to be practical on modern workstations.

  15. Manin matrices and Talalaev's formula

    International Nuclear Information System (INIS)

    Chervov, A; Falqui, G

    2008-01-01

    In this paper we study properties of Lax and transfer matrices associated with quantum integrable systems. Our point of view stems from the fact that their elements satisfy special commutation properties, considered by Yu I Manin some 20 years ago at the beginning of quantum group theory. These are the commutation properties of matrix elements of linear homomorphisms between polynomial rings; more explicitly these read: (1) elements of the same column commute; (2) commutators of the cross terms are equal: [M_{ij}, M_{kl}] = [M_{kj}, M_{il}] (e.g. [M_{11}, M_{22}] = [M_{21}, M_{12}]). The main aim of this paper is twofold: on the one hand we observe and prove that such matrices (which we call Manin matrices in short) behave almost as well as matrices with commutative elements. Namely, the theorems of linear algebra (e.g., a natural definition of the determinant, the Cayley-Hamilton theorem, the Newton identities and so on and so forth) have a straightforward counterpart in the case of Manin matrices. On the other hand, we remark that such matrices are somewhat ubiquitous in the theory of quantum integrability. For instance, Manin matrices (and their q-analogs) include matrices satisfying the Yang-Baxter relation 'RTT=TTR' and the so-called Cartier-Foata matrices. Also, they enter Talalaev's remarkable formulae: det(∂_z − L_Gaudin(z)), det(1 − e^{−∂_z} T_Yangian(z)) for the 'quantum spectral curve', and appear in the separation of variables problem and Capelli identities. We show that theorems of linear algebra, after being established for such matrices, have various applications to quantum integrable systems and Lie algebras, e.g. in the construction of new generators in Z(U_crit(ĝl_n)) (and, in general, in the construction of quantum conservation laws), in the Knizhnik-Zamolodchikov equation, and in the problem of Wick ordering. We propose, in the appendix, a construction of quantum separated variables for the XXX-Heisenberg system

  16. On reflectionless equi-transmitting matrices

    Directory of Open Access Journals (Sweden)

    Pavel Kurasov

    2014-01-01

    Full Text Available Reflectionless equi-transmitting unitary matrices are studied in connection to matching conditions in quantum graphs. All possible such matrices of size 6 are described explicitly. It is shown that such matrices form 30 six-parameter families intersected along 12 five-parameter families closely connected to conference matrices.

  17. Spectra of sparse random matrices

    International Nuclear Information System (INIS)

    Kuehn, Reimer

    2008-01-01

    We compute the spectral density for ensembles of sparse symmetric random matrices using the replica method. Our formulation of the replica-symmetric ansatz shares the symmetries of that suggested in a seminal paper by Rodgers and Bray (symmetry with respect to permutation of replicas and rotation symmetry in the space of replicas), but uses a different representation in terms of superpositions of Gaussians. It gives rise to a pair of integral equations which can be solved by a stochastic population-dynamics algorithm. Remarkably, our representation allows us to identify pure-point contributions to the spectral density related to the existence of normalizable eigenstates. Our approach is not restricted to matrices defined on graphs with Poissonian degree distribution. Matrices defined on regular random graphs or on scale-free graphs are easily handled. We also look at matrices with row constraints such as discrete graph Laplacians. Our approach naturally allows us to unfold the total density of states into contributions coming from vertices of different local coordinations, and an example of such an unfolding is presented. Our results are well corroborated by numerical diagonalization studies of large finite random matrices
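
    The closing remark about numerical diagonalization is easy to reproduce in miniature. The sketch below samples a sparse symmetric matrix with Poissonian mean degree c and histograms its spectrum; the size and degree are our choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 1000, 4                            # matrix size and mean degree
upper = np.triu(rng.random((n, n)) < c / n, k=1)
J = upper * rng.standard_normal((n, n))
H = J + J.T                               # sparse symmetric random matrix
eigs = np.linalg.eigvalsh(H)
density, edges = np.histogram(eigs, bins=60, density=True)
print(density.max(), edges[np.argmax(density)])  # peak of the empirical spectral density
```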

  18. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index ... eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension...
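
    A quick numerical illustration of the phenomenon described here, under assumptions of our choosing (Student-t entries with tail index 2.5, so the fourth moment is infinite): the largest eigenvalue of the sample covariance matrix tracks the largest diagonal entry, an extreme order statistic, rather than a bulk-edge prediction.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 400, 2000
X = rng.standard_t(df=2.5, size=(p, n))   # heavy tails: infinite fourth moment
S = X @ X.T                               # unnormalized sample covariance
lam_max = np.linalg.eigvalsh(S)[-1]
print(lam_max, S.diagonal().max())        # largest eigenvalue vs largest diagonal entry
```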

  19. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  20. Chequered surfaces and complex matrices

    International Nuclear Information System (INIS)

    Morris, T.R.; Southampton Univ.

    1991-01-01

    We investigate a large-N matrix model involving general complex matrices. It can be reinterpreted as a model of two hermitian matrices with specific couplings, and as a model of positive definite hermitian matrices. Large-N perturbation theory generates dynamical triangulations in which the triangles can be chequered (i.e. coloured so that neighbours are opposite colours). On a sphere there is a simple relation between such triangulations and those generated by the single hermitian matrix model. For the torus (and a quartic potential) we solve the counting problem for the number of triangulations that cannot be chequered. The critical physics of chequered triangulations is the same as that of the hermitian matrix model. We show this explicitly by solving non-perturbatively pure two-dimensional 'chequered' gravity. The interpretative framework given here applies to a number of other generalisations of the hermitian matrix model. (orig.)

  1. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  2. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
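
    The core computation, expressing a query state as a convex (barycentric) combination of library states while allowing an explicit approximation error, is a small linear program. The following sketch minimizes the L1 error with scipy; the variable layout and dimensions are ours, not the authors'.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
d, m = 5, 12                               # phase-space dimension, library size
X = rng.standard_normal((d, m))            # library of past states (columns)
y = X @ rng.dirichlet(np.ones(m))          # query point inside the convex hull

# minimize sum(e)  s.t.  -e <= X w - y <= e,  sum(w) = 1,  w >= 0,  e >= 0
c = np.concatenate([np.zeros(m), np.ones(d)])
A_ub = np.block([[X, -np.eye(d)], [-X, -np.eye(d)]])
b_ub = np.concatenate([y, -y])
A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + d))
w = res.x[:m]                              # barycentric weights
print(np.abs(X @ w - y).max())             # ~0: the query is reproduced exactly
```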

  3. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data

  4. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  5. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  6. Rank reduction of correlation matrices by majorization

    NARCIS (Netherlands)

    R. Pietersz (Raoul); P.J.F. Groenen (Patrick)

    2004-01-01

    textabstractIn this paper a novel method is developed for the problem of finding a low-rank correlation matrix nearest to a given correlation matrix. The method is based on majorization and therefore it is globally convergent. The method is computationally efficient, is straightforward to implement,
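
    For readers who want to see the problem concretely, here is a common truncate-and-rescale heuristic for a rank-k correlation matrix: keep the top k eigenpairs and renormalize rows to restore the unit diagonal. This is not the majorization algorithm of the paper, only a simple baseline of the kind such methods are designed to improve on.

```python
import numpy as np

def lowrank_corr(C, k):
    """Rank-k correlation-like matrix: truncate eigenpairs, rescale rows to unit norm."""
    vals, vecs = np.linalg.eigh(C)
    top = np.argsort(vals)[::-1][:k]
    B = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
    B /= np.linalg.norm(B, axis=1, keepdims=True)   # restore the unit diagonal
    return B @ B.T

C = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.4],
              [0.7, 0.4, 1.0]])
print(lowrank_corr(C, 2))                           # rank 2, ones on the diagonal
```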

  7. Efficient Rank Reduction of Correlation Matrices

    NARCIS (Netherlands)

    I. Grubisic (Igor); R. Pietersz (Raoul)

    2005-01-01

    textabstractGeometric optimisation algorithms are developed that efficiently find the nearest low-rank correlation matrix. We show, in numerical tests, that our methods compare favourably to the existing methods in the literature. The connection with the Lagrange multiplier method is established,

  8. Loop diagrams without γ matrices

    International Nuclear Information System (INIS)

    McKeon, D.G.C.; Rebhan, A.

    1993-01-01

    By using a quantum-mechanical path integral to compute matrix elements of the form ⟨x|exp(−iHt)|y⟩, radiative corrections in quantum-field theory can be evaluated without encountering loop-momentum integrals. In this paper we demonstrate how Dirac γ matrices that occur in the proper-time 'Hamiltonian' H lead to the introduction of a quantum-mechanical path integral corresponding to a superparticle analogous to one proposed recently by Fradkin and Gitman. Direct evaluation of this path integral circumvents many of the usual algebraic manipulations of γ matrices in the computation of quantum-field-theoretical Green's functions involving fermions

  9. Immanant Conversion on Symmetric Matrices

    Directory of Open Access Journals (Sweden)

    Purificação Coelho M.

    2014-01-01

    Full Text Available Let Σ_n(C) denote the space of all n × n symmetric matrices over the complex field C. The main objective of this paper is to prove that the maps Φ : Σ_n(C) → Σ_n(C) satisfying, for any fixed irreducible characters χ, χ′ of S_n, the condition d_χ(A + αB) = d_{χ′}(Φ(A) + αΦ(B)) for all matrices A, B ∈ Σ_n(C) and all scalars α ∈ C, are automatically linear and bijective. As a corollary of the above result we characterize all such maps Φ acting on Σ_n(C).

  10. On families of anticommuting matrices

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel

    2016-01-01

    Roč. 493, March 15 (2016), s. 494-507 ISSN 0024-3795 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : anticommuting matrices * sum-of-squares formulas Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S0024379515007296

  11. On families of anticommuting matrices

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel

    2016-01-01

    Roč. 493, March 15 (2016), s. 494-507 ISSN 0024-3795 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : anticommuting matrices * sum-of-squares formulas Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S0024379515007296

  12. The modified Gauss diagonalization of polynomial matrices

    International Nuclear Information System (INIS)

    Saeed, K.

    1982-10-01

    The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)
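
    The abstract's point, that a modified elimination keeps diagonal entries polynomial rather than rational, has the same flavor as fraction-free (Bareiss-style) elimination. The sympy step below illustrates that flavor only; it is not Saeed's algorithm, and the matrix is an arbitrary example.

```python
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[x + 1, x],
               [x**2,  x - 1]])

# One fraction-free elimination step: row2 <- pivot*row2 - multiplier*row1,
# which keeps every entry a pure polynomial instead of a rational function.
pivot, mult = M[0, 0], M[1, 0]
M[1, :] = (pivot * M[1, :] - mult * M[0, :]).expand()
print(M)  # M[1, 1] == -x**3 + x**2 - 1, a polynomial
```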

  13. Double stochastic matrices in quantum mechanics

    International Nuclear Information System (INIS)

    Louck, J.D.

    1997-01-01

    The general set of doubly stochastic matrices of order n corresponding to ordinary nonrelativistic quantum mechanical transition probability matrices is given. Landé's discussion of the nonquantal origin of such matrices is noted. Several concrete examples are presented for elementary and composite angular momentum systems with the focus on the unitary symmetry associated with such systems in the spirit of the recent work of Bohr and Ulfbeck. Birkhoff's theorem on doubly stochastic matrices of order n is reformulated in a geometrical language suitable for application to the subset of quantum mechanical doubly stochastic matrices. Specifically, it is shown that the set of points on the unit sphere in Cartesian n²-space maps surjectively onto the set of doubly stochastic matrices of order n. The question is raised, but not answered, as to what is the subset of points of this unit sphere that corresponds to the quantum mechanical transition probability matrices, and what is the symmetry group of this subset of matrices

  14. Virial expansion for almost diagonal random matrices

    Science.gov (United States)

    Yevtushenko, Oleg; Kravtsov, Vladimir E.

    2003-08-01

    Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_{ij}, i ≥ j, is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_{ii}|²⟩ ~ 1 and ⟨|H_{i…

  15. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  16. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
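
    A toy version of the kind of comparison the paper performs: when the dimension p is close to the sample size n, the log-determinant of the raw sample covariance is severely biased, while a shrinkage estimator (Ledoit-Wolf here, one of many possible choices and not necessarily among the paper's eight) is far closer to the truth. The sizes and the identity ground truth are assumptions of this sketch.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)
p, n = 100, 120                               # dimension close to sample size
X = rng.standard_normal((n, p))               # true covariance I, so log det = 0

S = np.cov(X, rowvar=False)                   # raw sample covariance
lw = LedoitWolf().fit(X)                      # shrinkage estimate
print(np.linalg.slogdet(S)[1])                # strongly biased away from 0
print(np.linalg.slogdet(lw.covariance_)[1])   # much closer to the truth
```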

  17. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  18. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  19. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar, so that each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph (K-NNG) based anomaly detectors. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and thereby represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
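
    A schematic of the two-stage architecture: compress, then score by neighborhood distance. In this sketch PCA stands in for the deep autoencoder and a single kNN detector replaces the ensemble, so it is a simplified caricature of the model, run on synthetic data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
nominal = rng.standard_normal((500, 50))           # nominal sample
outliers = 4.0 * rng.standard_normal((5, 50))      # anomalously spread points
X = np.vstack([nominal, outliers])

Z = PCA(n_components=5).fit(nominal).transform(X)  # compact subspace (DAE stand-in)
nn = NearestNeighbors(n_neighbors=10).fit(Z[:500]) # neighbors among nominal points
dist, _ = nn.kneighbors(Z)
score = dist.mean(axis=1)                          # mean kNN distance as anomaly score
print(score[:500].mean(), score[500:].mean())      # outliers typically score far higher
```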

  20. Global stability analysis of epidemiological models based on Volterra–Lyapunov stable matrices

    International Nuclear Information System (INIS)

    Liao Shu; Wang Jin

    2012-01-01

    Highlights: Global dynamics of high dimensional dynamical systems. A systematic approach for global stability analysis. Epidemiological models of environment-dependent diseases. Abstract: In this paper, we study the global dynamics of a class of mathematical epidemiological models formulated by systems of differential equations. These models involve both human population and environmental component(s) and constitute high-dimensional nonlinear autonomous systems, for which the global asymptotic stability of the endemic equilibria has been a major challenge in analyzing the dynamics. By incorporating the theory of Volterra–Lyapunov stable matrices into the classical method of Lyapunov functions, we present an approach for global stability analysis and obtain new results on some three- and four-dimensional model systems. In addition, we conduct numerical simulation to verify the analytical results.

  1. Phenomenological mass matrices with a democratic warp

    International Nuclear Information System (INIS)

    Kleppe, A.

    2018-01-01

    Taking into account all available data on the mass sector, we obtain unitary rotation matrices that diagonalize the quark matrices by using a specific parametrization of the Cabibbo-Kobayashi-Maskawa mixing matrix. In this way, we find mass matrices for the up- and down-quark sectors of a specific, symmetric form, with traces of a democratic texture.

  2. S-matrices and integrability

    International Nuclear Information System (INIS)

    Bombardelli, Diego

    2016-01-01

    In these notes we review the S-matrix theory in (1+1)-dimensional integrable models, focusing mainly on the relativistic case. Once the main definitions and physical properties are introduced, we discuss the factorization of scattering processes due to integrability. We then focus on the analytic properties of the two-particle scattering amplitude and illustrate the derivation of the S-matrices for all the possible bound states using the so-called bootstrap principle. General algebraic structures underlying the S-matrix theory and its relation with the form factors axioms are briefly mentioned. Finally, we discuss the S-matrices of the sine-Gordon and the SU(2) and SU(3) chiral Gross–Neveu models. (topical review)

  3. Synthesised standards in natural matrices

    International Nuclear Information System (INIS)

    Olsen, D.G.

    1980-01-01

    The problem of securing the most reliable standards for the accurate analysis of radionuclides is discussed in the paper and in the comment on the paper. It is contended in the paper that the best standards can be created by quantitative addition of accurately known spiking solutions into carefully selected natural matrices. On the other hand it is argued that many natural materials can be successfully standardized for numerous trace constituents. Both points of view are supported with examples. (U.K.)

  4. Fast multipole preconditioners for sparse matrices arising from elliptic equations

    KAUST Repository

    Ibeid, Huda

    2017-11-09

    Among optimal hierarchical algorithms for the computational solution of elliptic problems, the fast multipole method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxable global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Here, we do not discuss the well developed applications of FMM to implement matrix-vector multiplications within Krylov solvers of boundary element methods. Instead, we propose using FMM for the volume-to-volume contribution of inhomogeneous Poisson-like problems, where the boundary integral is a small part of the overall computation. Our method may be used to precondition sparse matrices arising from finite difference/element discretizations, and can handle a broader range of scientific applications. It is capable of algebraic convergence rates down to the truncation error of the discretized PDE comparable to those of multigrid methods, and it offers potentially superior multicore and distributed memory scalability properties on commodity architecture supercomputers. Compared with other methods exploiting the low-rank character of off-diagonal blocks of the dense resolvent operator, FMM-preconditioned Krylov iteration may reduce the amount of communication because it is matrix-free and exploits the tree structure of FMM. We describe our tests in reproducible detail with freely available codes and outline directions for further extensibility.

  5. Fast multipole preconditioners for sparse matrices arising from elliptic equations

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Pestana, Jennifer; Keyes, David E.

    2017-01-01

    Among optimal hierarchical algorithms for the computational solution of elliptic problems, the fast multipole method (FMM) stands out for its adaptability to emerging architectures, having high arithmetic intensity, tunable accuracy, and relaxable global synchronization requirements. We demonstrate that, beyond its traditional use as a solver in problems for which explicit free-space kernel representations are available, the FMM has applicability as a preconditioner in finite domain elliptic boundary value problems, by equipping it with boundary integral capability for satisfying conditions at finite boundaries and by wrapping it in a Krylov method for extensibility to more general operators. Here, we do not discuss the well developed applications of FMM to implement matrix-vector multiplications within Krylov solvers of boundary element methods. Instead, we propose using FMM for the volume-to-volume contribution of inhomogeneous Poisson-like problems, where the boundary integral is a small part of the overall computation. Our method may be used to precondition sparse matrices arising from finite difference/element discretizations, and can handle a broader range of scientific applications. It is capable of algebraic convergence rates down to the truncation error of the discretized PDE comparable to those of multigrid methods, and it offers potentially superior multicore and distributed memory scalability properties on commodity architecture supercomputers. Compared with other methods exploiting the low-rank character of off-diagonal blocks of the dense resolvent operator, FMM-preconditioned Krylov iteration may reduce the amount of communication because it is matrix-free and exploits the tree structure of FMM. We describe our tests in reproducible detail with freely available codes and outline directions for further extensibility.

  6. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  7. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth, and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)

  8. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, to detect context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  9. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high-dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and to high-dimensional replicator systems with a stochastic element. A high-dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
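
    The early-warning construction can be miniaturized: linearize the (mean field) dynamics at the current configuration, find the least stable eigendirection, and monitor the overlap of the state with it. The two-dimensional toy system below is our stand-in for the high-dimensional models studied in the paper.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (the mean-field stability matrix)."""
    n = x.size
    J = np.empty((n, n))
    f0 = f(x)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f0) / eps
    return J

mu = 0.05
f = lambda x: np.array([mu * x[0] - x[1] + x[0] * x[1],
                        x[0] - 0.5 * x[1]])
state = np.array([0.01, 0.02])

J = jacobian(f, state)
vals, vecs = np.linalg.eig(J)
k = np.argmax(vals.real)                       # least stable direction
overlap = abs(vecs[:, k].conj() @ state) / np.linalg.norm(state)
print(vals[k].real, overlap)                   # growth rate and overlap indicator
```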

  10. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  11. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method to cluster such data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the user to supply the number of clusters. The PCM is made similarity-based by combining it with the mountain method. Though this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are checked with synthetic datasets.

  12. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  13. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  14. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
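
    One of the machine learning alternatives named above can be sketched directly: LASSO-penalized logistic regression screening a high-dimensional set of binary claims-code proxies for the treatment model. The data and variable roles below are synthetic placeholders, not the study's cohort or its exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(8)
n, p = 1000, 200
C = (rng.random((n, p)) < 0.1).astype(float)        # binary claims-code proxies
logit = C[:, :5].sum(axis=1) - 0.5                  # only 5 proxies drive treatment
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# L1-penalized logistic regression as a proxy-screening step for the PS model
model = LogisticRegressionCV(Cs=5, penalty='l1', solver='liblinear')
model.fit(C, treat)
selected = np.flatnonzero(model.coef_.ravel() != 0)
print(len(selected), selected[:10])                 # proxies retained for the PS model
```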

  15. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...

  16. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  17. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, the article introduces a novel regression procedure for intrinsic variables constrained to a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in the original, principal component, and diffusion spaces are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
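
    A bare-bones version of the pipeline: build a diffusion-map embedding from a kernel on the data, then regress the quantity of interest on the embedded coordinates with a GP. The paper's diffusion-distance covariance construction is richer than this; the bandwidth, synthetic data, and train/test split are all assumptions of the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((300, 2))  # noisy circle
y = np.sin(2 * t)                                   # quantity of interest on the manifold

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / 0.1)                               # kernel; bandwidth is an assumption
P = K / K.sum(axis=1, keepdims=True)                # row-normalized Markov matrix
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
psi = (vecs[:, order[1:3]] * vals[order[1:3]]).real  # two nontrivial diffusion coordinates

gp = GaussianProcessRegressor().fit(psi[:200], y[:200])
print(gp.score(psi[200:], y[200:]))                 # held-out fit in diffusion space
```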

  18. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  19. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  20. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...

  1. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  2. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    textabstractThis paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  3. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  4. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  5. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  6. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  7. Sparse Matrices in Frame Theory

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Krahmer, Felix; Kutyniok, Gitta

    2014-01-01

    Frame theory is closely intertwined with signal processing through a canon of methodologies for the analysis of signals using (redundant) linear measurements. The canonical dual frame associated with a frame provides a means for reconstruction by a least squares approach, but other dual frames...... yield alternative reconstruction procedures. The novel paradigm of sparsity has recently entered the area of frame theory in various ways. Of those different sparsity perspectives, we will focus on the situations where frames and (not necessarily canonical) dual frames can be written as sparse matrices...

  8. The Inverse of Banded Matrices

    Science.gov (United States)

    2013-01-01

    In this paper, generalizing a method of Mallik (1999), we give the LU factorization and the inverse of the banded matrix B_{r,n} (if it exists), whose entries are indexed over −r ≤ i ≤ r, 1 ≤ j ≤ r, with the remaining un-indexed entries all zeros.

  9. Fusion algebra and fusing matrices

    International Nuclear Information System (INIS)

    Gao Yihong; Li Miao; Yu Ming.

    1989-09-01

    We show that the Wilson line operators in topological field theories form a fusion algebra. In general, the fusion algebra is a relation among the fusing (F) matrices. In the case of the SU(2) WZW model, some special F matrix elements are found in this way, and the remaining F matrix elements are then determined up to a sign. In addition, the S(j) modular transformation of the one point blocks on the torus is worked out. Our results are found to agree with those obtained from the quantum group method. (author). 24 refs

  10. Transfer matrices for multilayer structures

    International Nuclear Information System (INIS)

    Baquero, R.

    1988-08-01

    We consider four of the transfer matrices defined to deal with multilayer structures. We deduce algorithms to calculate them numerically, in a simple and neat way. We illustrate their application to semi-infinite systems using SGFM formulae. These algorithms are of fast convergence and allow a calculation of bulk-, surface- and inner-layers band structure in good agreement with much more sophisticated calculations. Supermatrices, interfaces and multilayer structures can be calculated in this way with a small computational effort. (author). 10 refs

  11. Orthogonal polynomials and random matrices

    CERN Document Server

    Deift, Percy

    2000-01-01

    This volume expands on a set of lectures held at the Courant Institute on Riemann-Hilbert problems, orthogonal polynomials, and random matrix theory. The goal of the course was to prove universality for a variety of statistical quantities arising in the theory of random matrix models. The central question was the following: Why do very general ensembles of random n × n matrices exhibit universal behavior as n → ∞? The main ingredient in the proof is the steepest descent method for oscillatory Riemann-Hilbert problems.

  12. Hypercyclic Abelian Semigroups of Matrices on C^n

    International Nuclear Information System (INIS)

    Ayadi, Adlene; Marzougui, Habib

    2010-07-01

    We give a complete characterization of the existence of a dense orbit for any abelian semigroup of matrices on C^n. For finitely generated semigroups, this characterization is explicit and is used to determine the minimal number of matrices in normal form over C which form a hypercyclic abelian semigroup on C^n. In particular, we show that no abelian semigroup generated by n matrices on C^n can be hypercyclic. (author)

  13. Lambda-matrices and vibrating systems

    CERN Document Server

    Lancaster, Peter; Stark, M; Kahane, J P

    1966-01-01

    Lambda-Matrices and Vibrating Systems presents aspects of and solutions to problems concerned with linear vibrating systems with a finite number of degrees of freedom and the theory of matrices. The book discusses some parts of the theory of matrices that will account for the solutions of the problems. The text starts with an outline of matrix theory, and some theorems are proved. The Jordan canonical form is also applied to understand the structure of square matrices. Classical theorems are discussed further by applying the Jordan canonical form, the Rayleigh quotient, and simple matrix pencils with late

  14. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
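
    As a concrete illustration of the matrix-and-vector-multiplication idea behind such bit-table methods, the following minimal sketch (not the authors' MATLAB implementation; the data set and itemset are hypothetical) shows how the support of an itemset reduces to column-wise boolean operations on a binary transaction matrix:

```python
# Minimal sketch: itemset support counting on a bit-table.
# Rows are transactions, columns are items; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((1000, 20)) < 0.3          # hypothetical binary data set

def support(B, itemset):
    """Fraction of rows containing every item in `itemset`."""
    mask = np.ones(B.shape[0], dtype=bool)
    for j in itemset:
        mask &= B[:, j]                   # column-wise AND = co-occurrence
    return mask.mean()

print(support(B, [0, 3, 7]))              # support of the itemset {0, 3, 7}
```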

  15. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method poses significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms will be presented and various numerical examples are utilized to demonstrate the efficacy of the method.

  16. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity in the last decade have provoked the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  18. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  19. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly-used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research on and application of LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulations show that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO provides consistent power in both low- and high-dimensional situations compared with other methods designed for high-dimensional settings. Its power is close to that of low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved. For Permissions, please
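
    For orientation, here is a minimal sketch of the EPS design followed by an ordinary cross-validated LASSO fit on simulated data; it deliberately omits the decorrelated-score inference that distinguishes EPS-LASSO, and all names and parameters are illustrative:

```python
# Illustrative sketch only: extreme phenotype sampling + plain LASSO.
# The paper's EPS-LASSO additionally performs hypothesis testing via a
# decorrelated score function, which is not reproduced here.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 2000, 500
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.array([1.0, -1.0, 0.8, -0.8, 0.5]) + rng.standard_normal(n)

# Keep only the upper and lower 10% phenotype tails (the EPS design).
lo, hi = np.quantile(y, [0.10, 0.90])
keep = (y <= lo) | (y >= hi)

model = LassoCV(cv=5).fit(X[keep], y[keep])
print(np.flatnonzero(model.coef_)[:10])   # indices of selected predictors
```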

  20. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Unlike existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rates. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....

  1. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed

  2. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  3. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  4. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  5. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.

  6. Pathological rate matrices: from primates to pathogens

    Directory of Open Access Journals (Sweden)

    Knight Rob

    2008-12-01

    Background: Continuous-time Markov models allow flexible, parametrically succinct descriptions of sequence divergence. Non-reversible forms of these models are more biologically realistic but are challenging to develop. The instantaneous rate matrices defined for these models are typically transformed into substitution probability matrices using a matrix exponentiation algorithm that employs eigendecomposition, but this algorithm has characteristic vulnerabilities that lead to significant errors when a rate matrix possesses certain 'pathological' properties. Here we tested whether pathological rate matrices exist in nature, and consider the suitability of different algorithms to their computation. Results: We used concatenated protein coding gene alignments from microbial genomes, primate genomes and independent intron alignments from primate genomes. The Taylor series expansion and eigendecomposition matrix exponentiation algorithms were compared to the less widely employed, but more robust, Padé with scaling and squaring algorithm for nucleotide, dinucleotide, codon and trinucleotide rate matrices. Pathological dinucleotide and trinucleotide matrices were evident in the microbial data set, affecting the eigendecomposition and Taylor algorithms respectively. Even using a conservative estimate of matrix error (occurrence of an invalid probability), both Taylor and eigendecomposition algorithms exhibited substantial error rates: ~100% of all exonic trinucleotide matrices were pathological to the Taylor algorithm, while ~10% of codon positions 1 and 2 dinucleotide matrices and intronic trinucleotide matrices, and ~30% of codon matrices, were pathological to eigendecomposition. The majority of Taylor algorithm errors derived from the occurrence of multiple unobserved states. A small number of negative probabilities were detected from the Padé algorithm on trinucleotide matrices that were attributable to machine precision. Although the Padé
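
    The following sketch contrasts the two exponentiation routes discussed above on a benign toy rate matrix: naive eigendecomposition versus SciPy's expm, which implements Padé with scaling and squaring (the matrix here is illustrative, not from the study's data sets):

```python
# Two routes from a rate matrix Q to substitution probabilities P = exp(Q t):
# eigendecomposition P = V exp(L t) V^{-1}, which can fail on "pathological"
# matrices, versus scipy.linalg.expm (Pade with scaling and squaring).
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.1,  0.4,  0.4,  0.3],
              [ 0.2, -0.9,  0.3,  0.4],
              [ 0.5,  0.2, -1.0,  0.3],
              [ 0.3,  0.3,  0.2, -0.8]])   # toy rate matrix, rows sum to 0
t = 0.5

w, V = np.linalg.eig(Q)                    # may be ill-conditioned in general
P_eig = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
P_pade = expm(Q * t)

print(np.abs(P_eig - P_pade).max())        # agreement on this benign example
print(P_pade.sum(axis=1))                  # each row of P sums to 1
```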

  7. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  8. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
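
    A minimal sketch of the projection idea, on assumed synthetic data: a low-dimensional projection is learned from a sparse sample of the manifold, and the network is then trained on projected inputs rather than in the raw high-dimensional space:

```python
# Sketch: learn a projection from a sparse sample, train the network on
# projected inputs. Data, sizes, and the PCA projection are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n, d_high, d_low = 5000, 100, 3
Z = rng.standard_normal((n, d_low))           # hidden low-dim coordinates
A = rng.standard_normal((d_low, d_high))
X = Z @ A                                     # data lying on a 3-d manifold
y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]       # target defined on the manifold

proj = PCA(n_components=d_low).fit(X[:200])   # projection from a sparse sample
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(proj.transform(X), y)
print(net.score(proj.transform(X), y))        # R^2 in the projected space
```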

  9. Quantum Hilbert matrices and orthogonal polynomials

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Berg, Christian

    2009-01-01

    Using the notion of quantum integers associated with a complex number q ≠ 0, we define the quantum Hilbert matrix and various extensions. They are Hankel matrices corresponding to certain little q-Jacobi polynomials when |q| < 1, and for the special value they are closely related to Hankel matrice...

  10. The construction of factorized S-matrices

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.

    1981-01-01

    We study the relationships between factorized S-matrices given as representations of the Zamolodchikov algebra and exactly solvable models constructed using the Baxter method. Several new examples of symmetric and non-symmetric factorized S-matrices are proposed. (orig.)

  11. Skew-adjacency matrices of graphs

    NARCIS (Netherlands)

    Cavers, M.; Cioaba, S.M.; Fallat, S.; Gregory, D.A.; Haemers, W.H.; Kirkland, S.J.; McDonald, J.J.; Tsatsomeros, M.

    2012-01-01

    The spectra of the skew-adjacency matrices of a graph are considered as a possible way to distinguish adjacency cospectral graphs. This leads to the following topics: graphs whose skew-adjacency matrices are all cospectral; relations between the matchings polynomial of a graph and the characteristic

  12. On Investigating GMRES Convergence using Unitary Matrices

    Czech Academy of Sciences Publication Activity Database

    Duintjer Tebbens, Jurjen; Meurant, G.; Sadok, H.; Strakoš, Z.

    2014-01-01

    Vol. 450, 1 June (2014), pp. 83-107 ISSN 0024-3795 Grant - others:GA AV ČR(CZ) M100301201; GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords: GMRES convergence * unitary matrices * unitary spectra * normal matrices * Krylov residual subspace * Schur parameters Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014

  13. Exact Inverse Matrices of Fermat and Mersenne Circulant Matrix

    Directory of Open Access Journals (Sweden)

    Yanpeng Zheng

    2015-01-01

    The well-known circulant matrices are applied to solve networked systems. In this paper, circulant and left circulant matrices with the Fermat and Mersenne numbers are considered. The nonsingularity of these special matrices is discussed. Meanwhile, the exact determinants and inverse matrices of these special matrices are presented.
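
    Since a circulant matrix is diagonalized by the DFT, its determinant and inverse follow from the FFT of its first column; the sketch below illustrates this with a first column of Fermat numbers (an illustrative choice, not necessarily the paper's exact construction):

```python
# Circulant determinant and inverse via the FFT of the first column.
import numpy as np
from scipy.linalg import circulant

c = np.array([3.0, 5.0, 17.0, 257.0])          # Fermat numbers F_0..F_3
C = circulant(c)                                # C[i, j] = c[(i - j) mod n]

lam = np.fft.fft(c)                             # eigenvalues of C
print(np.prod(lam).real, np.linalg.det(C))      # the two determinants agree

c_inv = np.fft.ifft(1.0 / lam).real             # first column of C^{-1}
C_inv = circulant(c_inv)
print(np.abs(C_inv @ C - np.eye(4)).max())      # ~ 0
```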

  14. Comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering.

    Science.gov (United States)

    Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan

    2017-03-01

    Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
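
    As a rough illustration of a max-type two-sample covariance comparison in the spirit of the procedure above (calibrated here by permutation rather than by the authors' method, on simulated data under the null):

```python
# Illustrative max-type statistic for testing equality of two covariance
# matrices when p exceeds the sample sizes; permutation calibration only.
import numpy as np

rng = np.random.default_rng(3)
n1, n2, p = 80, 90, 200
X = rng.standard_normal((n1, p))
Y = rng.standard_normal((n2, p))                # here H0 (equal covariances) holds

def max_stat(X, Y):
    """Largest absolute entrywise difference of the two sample covariances."""
    return np.abs(np.cov(X, rowvar=False) - np.cov(Y, rowvar=False)).max()

obs = max_stat(X, Y)
Z = np.vstack([X, Y])
perm = []
for _ in range(200):                            # permutation calibration
    idx = rng.permutation(n1 + n2)
    perm.append(max_stat(Z[idx[:n1]], Z[idx[n1:]]))
print("approximate p-value:", (np.array(perm) >= obs).mean())
```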

  15. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Directory of Open Access Journals (Sweden)

    Marco Congedo

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.

  16. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Science.gov (United States)

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2014-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
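
    The closed form for two SPD matrices mentioned in both abstracts, G = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2), can be checked numerically; the sketch below is purely illustrative and does not implement the AJD-based approximation used for larger sets:

```python
# Geometric mean of two SPD matrices via its algebraic closed form.
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4)); A = M @ M.T + 4 * np.eye(4)   # random SPD
N = rng.standard_normal((4, 4)); B = N @ N.T + 4 * np.eye(4)   # random SPD

Ah = sqrtm(A).real
Ahi = inv(Ah)
G = Ah @ sqrtm(Ahi @ B @ Ahi).real @ Ah        # geometric mean of A and B

# The mean solves the Riccati equation G A^{-1} G = B:
print(np.abs(G @ inv(A) @ G - B).max())        # ~ 0
```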

  17. Community Detection for Correlation Matrices

    Directory of Open Access Journals (Sweden)

    Mel MacMahon

    2015-04-01

    A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with “hard” cores and “soft” peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect “soft stocks” that alternate between communities; and discuss implications for portfolio optimization and risk management.

  18. Community Detection for Correlation Matrices

    Science.gov (United States)

    MacMahon, Mel; Garlaschelli, Diego

    2015-04-01

    A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.

  19. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  20. The Antitriangular Factorization of Saddle Point Matrices

    KAUST Repository

    Pestana, J.

    2014-01-01

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173-196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorization for saddle point matrices and demonstrate how it represents the common nullspace method. We show that rank-1 updates to the saddle point matrix can be easily incorporated into the factorization and give bounds on the eigenvalues of matrices important in saddle point theory. We show the relation of this factorization to constraint preconditioning and how it transforms but preserves the structure of block diagonal and block triangular preconditioners. © 2014 Society for Industrial and Applied Mathematics.

  1. Polynomial sequences generated by infinite Hessenberg matrices

    Directory of Open Access Journals (Sweden)

    Verde-Star Luis

    2017-01-01

    We show that an infinite lower Hessenberg matrix generates polynomial sequences that correspond to the rows of infinite lower triangular invertible matrices. Orthogonal polynomial sequences are obtained when the Hessenberg matrix is tridiagonal. We study properties of the polynomial sequences and their corresponding matrices which are related to recurrence relations, companion matrices, matrix similarity, construction algorithms, and generating functions. When the Hessenberg matrix is also Toeplitz, the polynomial sequences turn out to be of interpolatory type and we obtain additional results. For example, we show that every nonderogatory finite square matrix is similar to a unique Toeplitz-Hessenberg matrix.
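
    A small sketch of the generating recurrence, under the standard convention x·p_k(x) = Σ_{j ≤ k+1} h_{k,j} p_j(x) with p_0 = 1 and h_{k,k+1} ≠ 0 (the matrix below is a hypothetical tridiagonal choice, which yields a Chebyshev-like orthogonal sequence):

```python
# Sketch: polynomial sequence generated by a lower Hessenberg matrix H via
#   x * p_k(x) = sum_{j <= k+1} H[k, j] * p_j(x),   p_0 = 1,
# solving row k for p_{k+1}. Tridiagonal H gives orthogonal polynomials.
import numpy as np

def poly_sequence(H, m):
    """Coefficient arrays (lowest degree first) of p_0 .. p_m."""
    P = [np.array([1.0])]                       # p_0 = 1
    for k in range(m):
        xp = np.concatenate([[0.0], P[k]])      # coefficients of x * p_k
        for j in range(k + 1):
            xp[:len(P[j])] -= H[k, j] * P[j]    # subtract h_{k,j} * p_j
        P.append(xp / H[k, k + 1])
    return P

# Hypothetical tridiagonal (Jacobi) choice: p_{k+1} = 2x p_k - p_{k-1}.
H = np.diag(np.full(5, 0.5), 1) + np.diag(np.full(5, 0.5), -1)
for p in poly_sequence(H, 4):
    print(np.polynomial.Polynomial(p))
```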

  2. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the method of ANVZ covariance is extended to the research of high dimensional black hole tunneling radiation.
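
    For reference, the generic Hamilton-Jacobi tunneling relations that such computations specialize to particular metrics read (standard textbook form, not the paper's specific expressions):

```latex
\Gamma \sim \exp\!\left(-\frac{2}{\hbar}\,\operatorname{Im} S\right),
\qquad
\operatorname{Im} S = \frac{\pi\,(\omega - j\,\Omega_H)}{\kappa},
\qquad
T_H = \frac{\hbar\,\kappa}{2\pi},
```

    where ω and j are the energy and angular momentum of the tunneling particle, Ω_H is the horizon angular velocity, and κ is the surface gravity of the horizon.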

  3. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
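
    To convey the flavor of hashing-based NN search, here is a toy random-hyperplane LSH index for cosine similarity; it is a generic LSH sketch, not the LSB-tree itself, and all sizes are illustrative:

```python
# Toy LSH: points whose 8-bit hyperplane signatures match become candidates.
import numpy as np

rng = np.random.default_rng(5)
n, d, bits = 10000, 64, 8
data = rng.standard_normal((n, d))
planes = rng.standard_normal((bits, d))         # random hyperplanes

def key(x):
    """8-bit signature: on which side of each hyperplane x falls."""
    return tuple(bool(b) for b in planes @ x > 0)

buckets = {}
for i, x in enumerate(data):
    buckets.setdefault(key(x), []).append(i)

q = rng.standard_normal(d)
cand = buckets.get(key(q), [])                  # points sharing q's bucket
if cand:
    best = min(cand, key=lambda i: np.linalg.norm(data[i] - q))
    print("approximate NN:", best, "from", len(cand), "candidates")
```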

  4. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  5. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  6. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.

  7. A novel algorithm of artificial immune system for high-dimensional function numerical optimization

    Institute of Scientific and Technical Information of China (English)

    DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen

    2005-01-01

    Based on the clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, it is proved that IMCPA is convergent. Compared with some other evolutionary programming algorithms (like the Breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, like high-dimensional function optimization, which maintains the diversity of the population, avoids premature convergence to some extent, and has a higher convergence speed.
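
    A bare-bones clonal-selection optimizer in the spirit of this algorithm family (a generic CLONALG-style sketch with made-up parameters, not IMCPA itself):

```python
# Generic clonal-selection sketch: clone the best antibodies, hypermutate
# the clones (better antibodies mutate less), keep the best survivors.
import numpy as np

rng = np.random.default_rng(6)

def sphere(x):                                 # toy high-dimensional objective
    return np.sum(x**2, axis=-1)

pop = rng.uniform(-5, 5, size=(30, 20))        # 30 antibodies, 20 dimensions
for gen in range(200):
    elite = pop[np.argsort(sphere(pop))[:10]]  # select the best antibodies
    # mutation scale grows with rank: rank-0 elite mutates least
    scale = 0.5 * (1 + np.arange(10))[:, None, None] / 10
    clones = elite[:, None, :] + scale * rng.standard_normal((10, 5, 20))
    cand = np.vstack([pop, clones.reshape(-1, 20)])
    pop = cand[np.argsort(sphere(cand))[:30]]  # survivor selection (memory)
print("best value:", sphere(pop).min())
```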

  8. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.

  9. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.

  10. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  11. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    Science.gov (United States)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to solve numerically a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable algebraic system, whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing our Matlab software simulations with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  12. Computer-aided diagnosis for phase-contrast X-ray computed tomography: quantitative characterization of human patellar cartilage with high-dimensional geometric features.

    Science.gov (United States)

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismüller, Axel

    2014-02-01

    Phase-contrast computed tomography (PCI-CT) has shown tremendous potential as an imaging modality for visualizing human cartilage with high spatial resolution. Previous studies have demonstrated the ability of PCI-CT to visualize (1) structural details of the human patellar cartilage matrix and (2) changes to chondrocyte organization induced by osteoarthritis. This study investigates the use of high-dimensional geometric features in characterizing such chondrocyte patterns in the presence or absence of osteoarthritic damage. Geometrical features derived from the scaling index method (SIM) and statistical features derived from gray-level co-occurrence matrices were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic curve (AUC). SIM-derived geometrical features exhibited the best classification performance (AUC, 0.95 ± 0.06) and were most robust to changes in ROI size. These results suggest that such geometrical features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix in an automated and non-subjective manner, while also enabling classification of cartilage as healthy or osteoarthritic with high accuracy. Such features could potentially serve as imaging markers for evaluating osteoarthritis progression and its response to different therapeutic intervention strategies.

  13. Synchronous correlation matrices and Connes’ embedding conjecture

    Energy Technology Data Exchange (ETDEWEB)

    Dykema, Kenneth J., E-mail: kdykema@math.tamu.edu [Department of Mathematics, Texas A& M University, College Station, Texas 77843-3368 (United States); Paulsen, Vern, E-mail: vern@math.uh.edu [Department of Mathematics, University of Houston, Houston, Texas 77204 (United States)

    2016-01-15

    In the work of Paulsen et al. [J. Funct. Anal. (in press); preprint arXiv:1407.6918], the concept of synchronous quantum correlation matrices was introduced and these were shown to correspond to traces on certain C*-algebras. In particular, synchronous correlation matrices arose in their study of various versions of quantum chromatic numbers of graphs and other quantum versions of graph theoretic parameters. In this paper, we develop these ideas further, focusing on the relations between synchronous correlation matrices and microstates. We prove that Connes’ embedding conjecture is equivalent to the equality of two families of synchronous quantum correlation matrices. We prove that if Connes’ embedding conjecture has a positive answer, then the tracial rank and projective rank are equal for every graph. We then apply these results to more general non-local games.

  14. Discrete canonical transforms that are Hadamard matrices

    International Nuclear Information System (INIS)

    Healy, John J; Wolf, Kurt Bernardo

    2011-01-01

    The group Sp(2,R) of symplectic linear canonical transformations has an integral kernel which has quadratic and linear phases, and which is realized by the geometric paraxial optical model. The discrete counterpart of this model is a finite Hamiltonian system that acts on N-point signals through N x N matrices whose elements also have a constant absolute value, although they do not form a representation of that group. Those matrices that are also unitary are Hadamard matrices. We investigate the manifolds of these N x N matrices under the Sp(2,R) equivalence imposed by the model, and find them to be on two-sided cosets. By means of an algorithm we determine representatives that lead to collections of mutually unbiased bases.

  15. The Antitriangular Factorization of Saddle Point Matrices

    KAUST Repository

    Pestana, J.; Wathen, A. J.

    2014-01-01

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173-196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorization for saddle

  16. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of earlier prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n → ∞ (where n is the number of units of the hypercycle), suggesting that increasing the number of hypercycle units lengthens the resilience time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is studied by means of numerical analysis, focusing on their extinction dynamics associated with the ghosts. Such networks allow us to explore the properties of the ghosts living in high dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold
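
    The elementary hypercycle replicator dynamics described above can be reproduced in a few lines; this sketch integrates an n = 5 hypercycle, where self-sustained oscillations are expected (parameters and initial conditions are illustrative, not the paper's):

```python
# Elementary hypercycle replicator equations for n members:
#   dx_i/dt = x_i (k x_{i-1} - Phi),   Phi = k * sum_j x_j x_{j-1},
# where Phi keeps the total concentration normalized to 1.
import numpy as np
from scipy.integrate import solve_ivp

n, k = 5, 1.0

def rhs(t, x):
    growth = k * x * np.roll(x, 1)            # x_i * x_{i-1} coupling
    return growth - x * growth.sum()          # outflow term Phi

x0 = np.full(n, 1.0 / n) + 1e-3 * np.arange(n)   # perturbed fixed point
x0 /= x0.sum()
sol = solve_ivp(rhs, (0, 500), x0, max_step=0.1)
print("late-time range of x_0:",
      sol.y[0, -500:].min(), sol.y[0, -500:].max())   # oscillation amplitude
```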

  17. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam to a superposition mode of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser consisting of an MZ interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication to transmit a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate performance has been achieved.

  18. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for the early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers that are highly predictive of ASD and are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
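
    The coarse-to-fine idea can be sketched without the full Bayesian machinery. Below, an L1-penalized logistic regression (a stand-in for the paper's probit MCMC; all names and sizes are illustrative) first selects blocks of averaged voxels at a coarse scale, then refits only on the voxels inside the selected blocks.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, p, block = 200, 4096, 64                  # p "voxels" in coarse blocks of 64
        X = rng.standard_normal((n, p))
        y = (X[:, :3].sum(axis=1) + rng.standard_normal(n) > 0).astype(int)

        # Stage 1 (coarse): average voxels within each block, select blocks sparsely.
        Xc = X.reshape(n, p // block, block).mean(axis=2)
        coarse = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xc, y)
        mask = np.repeat(coarse.coef_.ravel() != 0, block)   # voxels in kept blocks

        # Stage 2 (fine): refit only on the voxels inside the selected blocks.
        fine = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X[:, mask], y)
        print("selected voxels:", np.flatnonzero(mask)[fine.coef_.ravel() != 0])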

  19. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference from previous approaches is that existing state-of-the-art schemes use duty cycling and sleep modes to reduce the energy consumption of individual sensor devices, whereas the proposed scheme employs group management so as to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.
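
    A sketch of the transmission rule, as we read it from the abstract, follows; the exact threshold mapping and the group-gating logic are our assumptions, not the paper's specification.

        from dataclasses import dataclass

        @dataclass
        class SensorDevice:
            group: int
            buffer_fill: float = 0.0

            def may_transmit(self, preamble_group: int, preamble_strength: float) -> bool:
                # Assumed reading of the abstract: only the group addressed by the
                # sink's preamble wakes up, and a device sends only once its buffer
                # exceeds a threshold set reciprocal to the preamble strength.
                if self.group != preamble_group:
                    return False                     # stay asleep, save energy
                threshold = 1.0 / max(preamble_strength, 1e-6)
                return self.buffer_fill >= threshold

        node = SensorDevice(group=2, buffer_fill=5.0)
        print(node.may_transmit(preamble_group=2, preamble_strength=0.4))  # 5 >= 2.5 -> True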

  20. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from high-dimensional data for inducing an accurate classification model is a tough computational challenge. The problem is nearly NP-hard, as the number of feature combinations escalates exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force is impractical, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search, which finds an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate the heuristic search. Simulation experiments are carried out by testing Swarm Search on several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative results show that Swarm Search is able to attain relatively low classification error rates without shrinking the size of the feature subset to its minimum.
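
    The wrapper structure of Swarm Search can be sketched as follows; the swarm itself is replaced here by a deliberately simple stochastic bit-flip search (any metaheuristic could be plugged in instead), and the k-NN classifier inside the fitness function is likewise an arbitrary choice.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(1)
        X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                                   random_state=1)

        def fitness(mask):
            # Any classifier can be plugged in here; k-NN is an arbitrary choice.
            if not mask.any():
                return 0.0
            return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

        best = rng.random(60) < 0.5                  # initial random feature mask
        best_fit = fitness(best)
        for _ in range(20):                          # search iterations
            for _ in range(8):                       # candidate "particles" per round
                cand = best ^ (rng.random(60) < 0.1)     # flip ~10% of the bits
                f = fitness(cand)
                if f > best_fit:
                    best, best_fit = cand, f
        print(f"accuracy {best_fit:.3f} with {int(best.sum())} features")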

  1. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    Full Text Available In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. The initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players chose to use support vector machines, LASSO, and random forests, respectively.
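
    The mechanics of the game can be sketched with two "players" and out-of-bag bootstrap rounds scored by the Brier score, a strictly proper scoring rule; the models and sizes below are placeholders, not those of the Nugenob analysis.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss

        X, y = make_classification(n_samples=200, n_features=50, random_state=0)
        rng = np.random.default_rng(0)
        players = {"lasso": LogisticRegression(penalty="l1", solver="liblinear"),
                   "forest": RandomForestClassifier(n_estimators=100, random_state=0)}

        scores = {name: [] for name in players}
        for _ in range(20):                               # bootstrap cross-validation
            boot = rng.integers(0, len(y), len(y))        # training = bootstrap sample
            oob = np.setdiff1d(np.arange(len(y)), boot)   # test = out-of-bag samples
            for name, model in players.items():
                p = model.fit(X[boot], y[boot]).predict_proba(X[oob])[:, 1]
                scores[name].append(brier_score_loss(y[oob], p))  # strictly proper
        for name, s in scores.items():
            print(name, round(float(np.mean(s)), 4))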

  2. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step, and the quality of the features in discriminating different classes plays an important role. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a similarity-dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real-life biomedical examples are also used in the analysis. The proposed plot is independent of the number of dimensions of the feature space.
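
    The exact construction of the plot is not spelled out in the abstract; one plausible reading, sketched below, plots for every point its distance to the nearest same-class neighbour (similarity axis) against its distance to the nearest other-class neighbour (dissimilarity axis), so overlap, separability and outliers become visible.

        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.datasets import load_iris
        from sklearn.metrics import pairwise_distances

        X, y = load_iris(return_X_y=True)
        D = pairwise_distances(X)
        np.fill_diagonal(D, np.inf)               # ignore self-distances

        same = np.array([D[i, y == y[i]].min() for i in range(len(y))])
        other = np.array([D[i, y != y[i]].min() for i in range(len(y))])

        plt.scatter(same, other, c=y)
        plt.xlabel("distance to nearest same-class point (similarity)")
        plt.ylabel("distance to nearest other-class point (dissimilarity)")
        plt.show()   # points below the diagonal are prone to misclassification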

  3. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so that it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol with spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  4. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, an exhaustive search over all combinations of features is a prerequisite for finding the optimal feature subsets for classifying such data sets. We show that our approach outperforms existing filter feature subset selection methods on most of the 24 selected benchmark data sets.
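
    The claim that pairwise mutual information cannot in general substitute for the high-dimensional quantity has a classic two-feature illustration: for an XOR target, each feature alone carries zero information while the pair jointly attains H(Y), certifying {x1, x2} as a Markov blanket. A minimal sketch:

        import numpy as np
        from sklearn.metrics import mutual_info_score

        rng = np.random.default_rng(0)
        x1 = rng.integers(0, 2, 5000)
        x2 = rng.integers(0, 2, 5000)
        y = x1 ^ x2                        # XOR target

        def entropy(v):
            p = np.bincount(v) / v.size
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())   # nats, matching sklearn's MI

        joint = 2 * x1 + x2                # encode the feature pair as one variable
        print(mutual_info_score(x1, y))            # ~ 0: pairwise MI sees nothing
        print(mutual_info_score(joint, y), entropy(y))   # both ~ log 2 = 0.693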

  5. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, and in contrast to LSB matching, HUGO allows the embedder to hide a message 7× longer at the same level of security.

  6. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    DimitrisG. Stavrakoudis

    2012-04-01

    Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as the computational requirements of its learning algorithm, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification task, as well as in 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  7. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0, π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam-splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by randomly changing the dimension of the time-bin entanglement and inserting two 'vacant' slots between the packets; the cheating is then revealed by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes.

  9. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that data differences in sparse and noisy dimensions occupy a large proportion of the similarity, so that any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
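
    A sketch of the interval-mapping step follows; how the retained components are aggregated into a single similarity score is not specified in the abstract, so the fraction used below is our assumption.

        import numpy as np

        def lattice_similarity(a, b, lo=0.0, hi=1.0, n_intervals=10):
            # Map each dimension onto interval indices of a normalized grid; only
            # components in the same or an adjacent interval contribute.
            ia = np.floor((a - lo) / (hi - lo) * n_intervals).astype(int)
            ib = np.floor((b - lo) / (hi - lo) * n_intervals).astype(int)
            close = np.abs(ia - ib) <= 1
            return float(close.mean())        # aggregation rule assumed, see above

        rng = np.random.default_rng(0)
        a, b = rng.random(1000), rng.random(1000)
        print(lattice_similarity(a, b))       # similarity in [0, 1]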

  10. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that aim to identify genes related to cancer. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
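
    The tuning loop is straightforward to sketch. Since MCP is not available in scikit-learn, the L1 (lasso) penalty stands in for it below; the grid and data are placeholders.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        X, y = make_classification(n_samples=150, n_features=500, n_informative=10,
                                   random_state=0)

        best = (None, -np.inf)
        for C in np.logspace(-2, 1, 12):             # sweep the tuning parameter
            model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
            p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
            auc = roc_auc_score(y, p)                # cross-validated AUC (CV-AUC)
            if auc > best[1]:
                best = (C, auc)
        print("chosen C =", best[0], "CV-AUC =", round(best[1], 3))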

  11. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, which becomes even harder when the data is simultaneously high-dimensional. Skewed data of this type often appears in the biomedical field. In this study, we address the problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of its strong generalization capability, we adopt the support vector machine (SVM) as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool for high-dimensional and imbalanced biomedical data.
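
    A minimal sketch of asymmetric bagging with random feature subspaces follows; the paper's FSS strategy balances accuracy and diversity more carefully than the plain random draw used here.

        import numpy as np
        from sklearn.svm import SVC

        def asbagging_fss(X, y, n_bags=15, n_feats=50, seed=0):
            # Asymmetric bagging: keep every minority sample and undersample the
            # majority class to the same size in each bag; every bag also draws
            # its own random feature subspace, and an SVM is trained per bag.
            rng = np.random.default_rng(seed)
            minority, majority = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
            ensemble = []
            for _ in range(n_bags):
                rows = np.concatenate([minority,
                                       rng.choice(majority, minority.size, replace=False)])
                cols = rng.choice(X.shape[1], n_feats, replace=False)
                ensemble.append((cols, SVC().fit(X[np.ix_(rows, cols)], y[rows])))
            return ensemble

        def predict(ensemble, X):
            votes = np.array([m.predict(X[:, cols]) for cols, m in ensemble])
            return (votes.mean(axis=0) > 0.5).astype(int)     # majority vote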

  12. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice, and also that many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them together with the useful dimensions, as feature compression methods do. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods such as product quantization on the FV and VLAD image representations.
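
    A sketch of the select-then-binarize pipeline follows; the importance criteria below (variance for the unsupervised case, a mean-difference score for the supervised case) are simple stand-ins for the paper's importance-sorting algorithm.

        import numpy as np

        def select_and_binarize(X, y=None, k=1024):
            # Rank dimensions by an importance score, keep the top k, then
            # apply 1-bit quantization (store only the sign of each dimension).
            if y is None:
                importance = X.var(axis=0)                     # unsupervised
            else:
                mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
                importance = np.abs(mu0 - mu1) / (X.std(axis=0) + 1e-12)
            keep = np.argsort(importance)[::-1][:k]
            return np.signbit(X[:, keep]), keep

        X = np.random.default_rng(0).standard_normal((100, 8192))  # e.g. FV/VLAD
        bits, keep = select_and_binarize(X, k=1024)
        print(bits.shape, bits.dtype)   # (100, 1024) bool -- 1 bit per dimension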

  13. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so that it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol with spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  14. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
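
    In matrix form the d = 4 generalization is the standard "clock and shift" construction (assumed here, since the abstract does not spell it out); a few lines verify that X^4 = I and that X obeys the generalized Pauli relation with Z.

        import numpy as np

        d = 4
        X = np.roll(np.eye(d), 1, axis=0)        # cyclic shift: |j> -> |j+1 mod 4>
        omega = np.exp(2j * np.pi / d)
        Z = np.diag(omega ** np.arange(d))       # clock gate: |j> -> omega^j |j>

        assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))  # X^4 = I
        assert np.allclose(Z @ X, omega * (X @ Z))   # ZX = omega XZ (Weyl relation)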

  15. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationship in the biomarkers levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  16. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high-dimensional covariates. The model is motivated by the need in imaging-genetic studies to identify genetic variants associated with brain imaging phenotypes, which often take the form of high-dimensional tensor fields. GRRLF identifies the effective dimensionality of the data from its structure, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and flexibility of GRRLF also allow various statistical models to be handled in a unified framework, and solutions can be computed efficiently. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain were measured at the voxel level, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.

  17. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  18. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  19. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  20. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forest, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because such methods cannot be directly applied to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of the redundant features. In our method, the redundancy of features is considered when dividing the original feature space. Each generated feature subset is then trained by a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms them.
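
    The pipeline can be sketched in a few lines; the round-robin assignment below is a simple stand-in for the paper's redundancy-driven partitioning.

        import numpy as np
        from sklearn.svm import SVC

        def partition_by_redundancy(X, n_parts=4):
            # Order features by total absolute correlation and deal them out
            # round-robin, so correlated features land in different subsets.
            corr = np.abs(np.corrcoef(X, rowvar=False))
            order = np.argsort(-corr.sum(axis=0))
            return [order[i::n_parts] for i in range(n_parts)]

        def fit_vote(X, y, parts):
            models = [(cols, SVC().fit(X[:, cols], y)) for cols in parts]
            def predict(Xnew):
                votes = np.array([m.predict(Xnew[:, cols]) for cols, m in models])
                return (votes.mean(axis=0) > 0.5).astype(int)   # majority voting
            return predict

        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 40))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        print(fit_vote(X, y, partition_by_redundancy(X))(X[:5]))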

  1. Flux Jacobian Matrices For Equilibrium Real Gases

    Science.gov (United States)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.

  2. Supercritical fluid extraction behaviour of polymer matrices

    International Nuclear Information System (INIS)

    Sujatha, K.; Kumar, R.; Sivaraman, N.; Srinivasan, T.G.; Vasudeva Rao, P.R.

    2007-01-01

    Organic compounds present in polymeric matrices such as neoprene, surgical gloves and PVC were co-extracted during the removal of uranium using supercritical fluid extraction (SFE) technique. Hence SFE studies of these matrices were carried out to establish the extracted species using HPLC, IR and mass spectrometry techniques. The initial study indicated that uranium present in the extract could be purified from the co-extracted organic species. (author)

  3. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. Results We propose an AUC-based approach u...

  4. High-Dimensional Analysis of Convex Optimization-Based Massive MIMO Decoders

    KAUST Repository

    Ben Atitallah, Ismail

    2017-04-01

    A wide range of modern large-scale systems relies on recovering a signal from noisy linear measurements. In many applications, the useful signal has inherent properties, such as sparsity, low-rankness, or boundedness, and making use of these properties and structures allows a more efficient recovery. Hence, a significant amount of work has been dedicated to developing and analyzing algorithms that can take advantage of the signal structure; especially since the advent of Compressed Sensing (CS), there has been significant progress in this direction. Generally speaking, the signal structure can be harnessed by solving an appropriate regularized or constrained M-estimator. In modern Multi-input Multi-output (MIMO) communication systems, all transmitted signals are drawn from finite constellations and are thus bounded. Besides, most recent modulation schemes such as Generalized Space Shift Keying (GSSK) or Generalized Spatial Modulation (GSM) yield signals that are inherently sparse. In the recovery procedure, sparsity and boundedness can be promoted by using ℓ1 norm regularization and by imposing an ℓ∞ norm constraint, respectively. In this thesis, we propose novel optimization algorithms to recover certain classes of structured signals, with emphasis on MIMO communication systems. The exact analysis permits a clear characterization of how well these systems perform and allows an automatic tuning of the parameters. In each context, we define the appropriate performance metrics and analyze them exactly in the High Dimensional Regime (HDR). The framework we use for the analysis is based on Gaussian process inequalities; in particular, on a new strong and tight version of a classical comparison inequality (due to Gordon, 1988) in the presence of additional convexity assumptions. The new framework that emerged from this inequality is coined the Convex Gaussian Min-max Theorem (CGMT).

  5. Protein matrices for wound dressings =

    Science.gov (United States)

    Vasconcelos, Andreia Joana Costa

    Fibrous proteins such as silk fibroin (SF), keratin (K) and elastin (EL) are able to mimic the extracellular matrix (ECM), which allows their recognition under physiological conditions. Their impressive mechanical properties and environmental stability, in combination with their biocompatibility and controllable morphology, provide an important basis for using these proteins in biomedical applications such as protein-based wound dressings. Over time, the concept of wound dressings has changed from traditional dressings such as honey or natural fibres, used just to protect the wound from external factors, to the interactive dressings of the present. Wounds can be classified as acute, which heal in the expected time frame, or chronic, which fail to heal because the orderly sequence of events is disrupted at one or more stages of the healing process. Moreover, chronic wound exudates contain high levels of tissue-destructive proteolytic enzymes such as human neutrophil elastase (HNE) that need to be controlled for proper healing. The aim of this work is to exploit the self-assembly properties of silk fibroin, keratin and elastin for the development of new protein materials to be used as wound dressings: i) evaluation of the effect of blending on the physical and chemical properties of the materials; ii) development of materials with different morphologies; iii) assessment of the cytocompatibility of the protein matrices; iv) ultimately, study of the ability of the developed protein matrices to act as wound dressings through the use of human chronic wound exudate; v) use of innovative short peptide sequences that allow targeted control of the high levels of HNE found in chronic wounds. Chapter III reports the preparation of silk fibroin/keratin (SF/K) blend films by solvent casting evaporation. Two solvent systems, aqueous and acidic, were used for the preparation of films from fibroin and keratin extracted from the respective silk and wool fibres. The effect of the solvent system used was

  6. High dimensional biological data retrieval optimization with NoSQL technology

    Science.gov (United States)

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase over MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data
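
    The key design idea can be illustrated with an in-memory stand-in for the HBase table (all names below are ours, not from the paper): a compound row key keeps every value for one probeset adjacent, so fetching all patients for a gene becomes a prefix scan rather than a relational join.

        # Hypothetical key-value layout; a Python dict stands in for HBase.
        store = {}

        def put(probeset, patient, value):
            # Compound row key: all values for one probeset share a prefix.
            store[f"expr:{probeset}:{patient}"] = value

        def scan_probeset(probeset):
            # Prefix scan: the cheap access path this key design is built for.
            prefix = f"expr:{probeset}:"
            return {k[len(prefix):]: v for k, v in store.items() if k.startswith(prefix)}

        put("1007_s_at", "patient42", 8.31)
        put("1007_s_at", "patient43", 7.02)
        print(scan_probeset("1007_s_at"))   # {'patient42': 8.31, 'patient43': 7.02}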

  7. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high-dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems, collectively referred to as the chaos game, that are closely related to iterated function systems. The goal of the algorithm is to create a human-readable representation of high-dimensional patient data that is capable of detecting unrevealed subclusters of patients within anticipated classifications. This provides a mechanism for a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical-system portion of the algorithm is designed to come after a feature selection filter and before a model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification), and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning, as the top-performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high-dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
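
    The memory mechanism behind the chaos game is compact enough to show directly; the sketch below is the generic iterated-function-system step, not Butterfly itself, and the four-vertex layout is an arbitrary choice.

        import numpy as np

        def chaos_game(symbols, vertices):
            # Repeatedly move halfway from the current point toward the vertex
            # named by the next symbol; the final point encodes the whole
            # sequence (the memory-type mechanism mentioned in the abstract).
            pt = np.zeros(2)
            for s in symbols:
                pt = (pt + vertices[s]) / 2.0
            return pt

        # Four vertices, e.g. one per quartile after discretizing each variable.
        V = {0: np.array([0, 0]), 1: np.array([0, 1]),
             2: np.array([1, 0]), 3: np.array([1, 1])}
        rng = np.random.default_rng(0)
        subject = rng.integers(0, 4, size=30)    # a subject's discretized variables
        print(chaos_game(subject, V))            # that subject's 2D coordinates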

  8. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, the extraction and evaluation of textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, due to the size of VHR images. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify earthquake damage. In addition to spectral information, textural information was also used during classification. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
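
    Extracting second-order Haralick features from a gray-level co-occurrence matrix, the texture step described above, is available in scikit-image; a minimal sketch follows (a random image stands in for the panchromatic data, and the distances/angles are illustrative).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # "greyco..." in older scikit-image

        img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)

        # Co-occurrence matrix for one pixel distance and four directions.
        glcm = graycomatrix(img, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            print(prop, graycoprops(glcm, prop).ravel())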

  9. High dimensional biological data retrieval optimization with NoSQL technology.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase over MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating

  10. MERSENNE AND HADAMARD MATRICES CALCULATION BY SCARPIS METHOD

    Directory of Open Access Journals (Sweden)

    N. A. Balonin

    2014-05-01

    Full Text Available Purpose. The paper deals with the problem of basic generalizations of Hadamard matrices associated with maximum determinant matrices or determinant-suboptimal matrices with orthogonal columns (weighing matrices, Mersenne and Euler matrices, etc.); calculation methods for the quasi-orthogonal local maximum determinant Mersenne matrices have not been studied sufficiently. The goal of this paper is to develop the theory of Mersenne and Hadamard matrices on the basis of research into the generalized Scarpis method. Methods. Extreme solutions are found in general by minimizing the maximum of the absolute values of the elements of the studied matrices, followed by their classification according to the number of levels and their values depending on the order. Less universal but more effective methods are based on structural invariants of quasi-orthogonal matrices (the Sylvester, Paley, and Scarpis methods, etc.). Results. Generalizations of Hadamard and Belevitch matrices as a family of quasi-orthogonal matrices of odd orders are considered; they include, in particular, two-level Mersenne matrices. Definitions of a section and a layer on the set of generalized matrices are proposed. Calculation algorithms for matrices of adjacent layers and sections based on matrices of lower orders are described. Examples approximating the Belevitch matrix structures up to the 22nd critical order by a Mersenne matrix of the third order are given. A new formulation of the modified Scarpis method to approximate Hadamard matrices of high orders by lower-order Mersenne matrices is proposed. The Williamson method is described by the example of approximating one-modular-level matrices by matrices with a small number of levels. Practical relevance. The efficiency of the developed direction for the creation of band-pass filters is justified. Algorithms for Mersenne matrix design by the Scarpis method are used in the software of a research program complex. Mersenne filters are based on the suboptimal by
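
    Of the structural methods named above, Sylvester's is the simplest to show; the doubling step below produces Hadamard matrices of order 2^k (the Scarpis and Mersenne constructions of the paper are not reproduced here).

        import numpy as np

        def sylvester_hadamard(k):
            # Sylvester's doubling construction: H_{2n} = [[H, H], [H, -H]]
            # yields +-1 matrices with mutually orthogonal rows.
            H = np.array([[1]])
            for _ in range(k):
                H = np.block([[H, H], [H, -H]])
            return H

        H = sylvester_hadamard(3)                    # order 8
        assert np.allclose(H @ H.T, 8 * np.eye(8))   # Hadamard condition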

  11. A Brief Historical Introduction to Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…

  12. Structure and stability of genetic variance-covariance matrices: A Bayesian sparse factor analysis of transcriptional variation in the three-spined stickleback.

    Science.gov (United States)

    Siren, J; Ovaskainen, O; Merilä, J

    2017-10-01

    The genetic variance-covariance matrix (G) is a quantity of central importance in evolutionary biology due to its influence on the rate and direction of multivariate evolution. However, the predictive power of empirically estimated G-matrices is limited for two reasons. First, phenotypes are high-dimensional, whereas traditional statistical methods are tuned to estimate and analyse low-dimensional matrices. Second, the stability of G to environmental effects and over time remains poorly understood. Using Bayesian sparse factor analysis (BSFG), designed to estimate high-dimensional G-matrices, we analysed levels of variation and covariation in 10,527 expressed genes in a large (n = 563) half-sib breeding design of three-spined sticklebacks subject to two temperature treatments. We found significant differences in the structure of G between the treatments: heritabilities and evolvabilities were higher in the warm than in the low-temperature treatment, suggesting more and faster opportunity to evolve in warm (stressful) conditions. Furthermore, a comparison of G and its phenotypic equivalent P revealed that the latter is a poor substitute for the former. Most strikingly, the results suggest that the expected impact of G on evolvability, as well as the similarity among G-matrices, may depend strongly on the number of traits included in the analyses. In our results, the inclusion of only a few traits in the analyses leads to an underestimation of the differences between the G-matrices and their predicted impacts on evolution. While the results highlight the challenges involved in estimating G, they also illustrate that, by enabling the estimation of large G-matrices, the BSFG method can improve predicted evolutionary responses to selection. © 2017 John Wiley & Sons Ltd.

  13. Penalized estimation for competing risks regression with applications to high-dimensional covariates

    DEFF Research Database (Denmark)

    Ambrogi, Federico; Scheike, Thomas H.

    2016-01-01

    While penalized regression methods are well developed for survival data (see, e.g., ... Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression.

  14. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    A high-dimensional feature space generally degrades classification performance in many applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between the features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. The technique was applied to publicly available datasets, where it substantially reduced the number of features used for classification while maintaining high accuracy. The proposed technique can be extremely useful in feature selection, as it heuristically removes non-contributing features to improve the performance of classifiers.
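
    A sketch of the binary-encoded genetic algorithm follows; the SVM fitness, population size and mutation rate are our placeholder choices, not parameters fixed by the paper.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def gene_masking(X, y, generations=30, pop=20, seed=0):
            # Each chromosome is a 0/1 mask over genes; fitness is the
            # cross-validated accuracy of a classifier on the unmasked genes.
            rng = np.random.default_rng(seed)
            masks = rng.random((pop, X.shape[1])) < 0.5
            def fit(m):
                return cross_val_score(SVC(), X[:, m], y, cv=3).mean() if m.any() else 0.0
            for _ in range(generations):
                scores = np.array([fit(m) for m in masks])
                parents = masks[np.argsort(scores)[-pop // 2:]]       # selection
                cross = np.where(rng.random((pop // 2, X.shape[1])) < 0.5,
                                 parents, rng.permutation(parents))   # uniform crossover
                children = cross ^ (rng.random(cross.shape) < 0.01)   # mutation
                masks = np.vstack([parents, children])
            return masks[np.argmax([fit(m) for m in masks])]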

  15. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    Energy Technology Data Exchange (ETDEWEB)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)

    2010-02-14

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments, with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of a state in which one or more (two) photons are present in each cavity is a necessary condition for the sudden death of entanglement; otherwise entanglement persists for an infinite time and decays asymptotically with the decay of the individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir, and the initial preparation of the entangled states.

  16. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    International Nuclear Information System (INIS)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail; Bougouffa, Smail

    2010-01-01

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments, with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of a state in which one or more (two) photons are present in each cavity is a necessary condition for the sudden death of entanglement; otherwise entanglement persists for an infinite time and decays asymptotically with the decay of the individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir, and the initial preparation of the entangled states.

  17. Time–energy high-dimensional one-side device-independent quantum key distribution

    International Nuclear Information System (INIS)

    Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei

    2017-01-01

    Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) imposes fewer requirements, which are much easier to meet. In this paper, by applying recently developed time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) protocol and provide a security proof against coherent attacks. In addition, we connect its security with quantum steering. By numerical simulation, we obtain the secret key rate for different values of Alice's detection efficiency. The results show that our protocol can perform much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice's detection efficiency, and the dispersion coefficient. Finally, we briefly analyze its performance in the optical fiber channel.

  18. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small-sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  19. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields, especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference on the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios.
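
    A hedged sketch of permutation-style inference for a Lasso-selected feature (the paper's actual randomization procedure differs in detail; the data, penalty, and replicate count here are invented):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 50, 200
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only feature 0 matters

lasso = Lasso(alpha=0.1).fit(X, y)
j = int(np.argmax(np.abs(lasso.coef_)))  # most strongly selected feature
obs = abs(lasso.coef_[j])

# Null distribution: refit on permuted responses, record |coef_j|.
null = [abs(Lasso(alpha=0.1).fit(X, rng.permutation(y)).coef_[j])
        for _ in range(200)]
pval = (1 + sum(b >= obs for b in null)) / (1 + len(null))
print(j, round(pval, 3))
```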

  20. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  1. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation … that allow effective inference in problems with a high degree of complexity (e.g. several thousands of genes) and a small number of observations (e.g. 10-100), as typically occurs in high-throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we construct a compact representation of the co-expression network that allows us to identify the regions with a high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than …

  2. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by a polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expansion remain unchanged across Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions across Kalman filter loops. The algorithm is based on an adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function by a sum of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated …

  3. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied for survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly in limited computational studies.
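
    The computational point, that the dual problem only involves an n × n kernel matrix even when m ≫ n, can be sketched with plain kernel ridge regression (invented data; the paper's adaptive kernel AFT estimator is more involved):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 10_000                  # n samples, m >> n features (genes)
X = rng.normal(size=(n, m))
y = rng.normal(size=n)              # e.g. log survival times in an AFT model

lam = 1.0
K = X @ X.T                         # n x n kernel matrix (never m x m)
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # dual coefficients

x_new = rng.normal(size=m)
y_hat = (X @ x_new) @ alpha         # prediction via kernel evaluations
print(y_hat)
```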

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  5. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision, such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
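
    The log-determinant divergence mentioned above has a compact form, D(X, Y) = tr(XY^{-1}) - log det(XY^{-1}) - n. A small numpy check (random SPD matrices as stand-ins for real data descriptors):

```python
import numpy as np

def logdet_div(X, Y):
    """Log-determinant divergence between SPD matrices X and Y."""
    n = X.shape[0]
    M = np.linalg.solve(Y, X)                 # Y^{-1} X
    sign, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5)); X = A @ A.T + np.eye(5)
B = rng.normal(size=(5, 5)); Y = B @ B.T + np.eye(5)
print(logdet_div(X, Y), logdet_div(X, X))     # second value is ~0
```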

  6. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete-time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood-bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated by the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models

  7. Quantum Entanglement and Reduced Density Matrices

    Science.gov (United States)

    Purwanto, Agus; Sukamto, Heru; Yuwana, Lila

    2018-05-01

    We investigate entanglement and separability criteria for a multipartite (n-partite) state by examining the ranks of its reduced density matrices. First, we construct a general formula to determine the criterion. The rank of the original density matrix always equals one, while the reduced density matrices can have various ranks. Next, the separability and entanglement criteria of a multipartite state are determined by calculating the ranks of its reduced density matrices. In this article we classify multipartite state criteria into completely entangled states, completely separable states, and compound states, i.e. sub-entangled and sub-entangled-separable states. Furthermore, we shorten the calculation proposed by previous research to determine the separability of a multipartite state, and extend the methods to be able to distinguish multipartite states based on the criteria above.
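
    The rank criterion can be demonstrated on the smallest case: for a pure bipartite state, the rank of a reduced density matrix is 1 for a product state and larger than 1 for an entangled one. A minimal sketch (two qubits only; the record above treats general n-partite states):

```python
import numpy as np

def reduced_rho_A(psi, dA, dB):
    """Trace out subsystem B from a pure state psi on C^dA (x) C^dB."""
    M = psi.reshape(dA, dB)
    return M @ M.conj().T

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
prod = np.kron([1.0, 0.0], [1.0, 0.0])           # |0>|0>

print(np.linalg.matrix_rank(reduced_rho_A(bell, 2, 2)))  # 2: entangled
print(np.linalg.matrix_rank(reduced_rho_A(prod, 2, 2)))  # 1: separable
```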

  8. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance dynamics.

  9. Advanced incomplete factorization algorithms for Stiltijes matrices

    Energy Technology Data Exchange (ETDEWEB)

    Il'in, V.P. [Siberian Division RAS, Novosibirsk (Russian Federation)]

    1996-12-31

    The modern numerical methods for solving linear algebraic systems Au = f with high-order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorizations of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data and open questions for incomplete factorization of Stiltijes matrices, which are "the best" ones in the sense that they have the most advanced results. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations, and parabolic equations.
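
    A minimal SciPy illustration of the preconditioning idea (a 1-D Laplacian as a stand-in Stiltijes matrix, and off-the-shelf ILU rather than the advanced factorizations discussed in the report):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factorization
M = spla.LinearOperator((n, n), ilu.solve)    # preconditioner M ~ A^{-1}

x, info = spla.cg(A, b, M=M)                  # preconditioned CG
print(info, np.linalg.norm(A @ x - b))        # info == 0 means converged
```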

  10. Wishart and anti-Wishart random matrices

    International Nuclear Information System (INIS)

    Janik, Romuald A; Nowak, Maciej A

    2003-01-01

    We provide a compact exact representation for the distribution of the matrix elements of the Wishart-type random matrices A † A, for any finite number of rows and columns of A, without any large N approximations. In particular, we treat the case when the Wishart-type random matrix contains redundant, non-random information, which is a new result. This representation is of interest for a procedure for reconstructing the redundant information hidden in Wishart matrices, with potential applications to numerous models based on biological, social and artificial intelligence networks

  11. Topological expansion of the chain of matrices

    International Nuclear Information System (INIS)

    Eynard, B.; Ferrer, A. Prats

    2009-01-01

    We solve the loop equations to all orders in 1/N^2 for the Chain of Matrices matrix model (with possibly an external field coupled to the last matrix of the chain). We show that the topological expansion of the free energy is, as for the 1- and 2-matrix models, given by the symplectic invariants of [19]. As a consequence, we find the double scaling limit explicitly, and we discuss modular properties and large N asymptotics. We also briefly discuss the limit of an infinite chain of matrices (matrix quantum mechanics).

  12. Partitioning sparse rectangular matrices for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, T.G.

    1998-05-01

    The authors are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well studied in the square symmetric case, but the rectangular problem has received very little attention. They will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. They will extend the spectral partitioning method for symmetric matrices to the rectangular case and compare this method to three new methods: the alternating partitioning method and two hybrid methods. The hybrid methods will be shown to be best.
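
    One way to sketch spectral partitioning in the rectangular case is to split rows and columns by the signs of the second singular vectors (a simplification; the report's methods include scaling and the alternating and hybrid variants):

```python
import numpy as np

rng = np.random.default_rng(5)
A = (rng.random((200, 120)) < 0.02).astype(float)   # sparse-ish 0/1 matrix

u, s, vt = np.linalg.svd(A, full_matrices=False)
rows_left = u[:, 1] >= 0    # sign of 2nd left singular vector splits rows
cols_left = vt[1] >= 0      # sign of 2nd right singular vector splits columns
print(int(rows_left.sum()), int(cols_left.sum()))
```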

  13. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

    Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of the angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in a parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality of the approximation.
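
    For reference, the exact (quadratic-per-point) angle-based outlier factor that the paper approximates: the variance, over pairs of other points, of distance-weighted angles at the query point. A naive sketch on invented data:

```python
import numpy as np
from itertools import combinations

def abof(i, X):
    """Variance of distance-weighted angles at point i (naive version)."""
    vals = []
    for j, k in combinations(range(len(X)), 2):
        if i in (j, k):
            continue
        a, b = X[j] - X[i], X[k] - X[i]
        vals.append((a @ b) / (a @ a) / (b @ b))
    return np.var(vals)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(size=(30, 5)), 8 * np.ones((1, 5))])  # 1 outlier
scores = [abof(i, X) for i in range(len(X))]
print(int(np.argmin(scores)))    # low ABOF flags the outlier (index 30)
```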

  14. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to changes in model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM …

  15. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes; a common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows us to automatically detect and filter spurious correlations. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R …
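
    The unregularized core of the idea, thresholding a sample correlation matrix at a rate-driven level, fits in a few lines (the paper adds robustness to outliers and entry-adaptive, data-driven thresholds; the data and threshold constant here are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 50, 400
X = rng.normal(size=(n, p))                 # null data: true correlations 0

R = np.corrcoef(X, rowvar=False)
t = 2 * np.sqrt(np.log(p) / n)              # threshold at the usual rate
R_hat = np.where(np.abs(R) >= t, R, 0.0)    # hard-threshold small entries
np.fill_diagonal(R_hat, 1.0)
print((R_hat != 0).mean())                  # fraction of entries retained
```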

  16. Application of H-matrices method to the calculation of the stress field in a viscoelastic medium

    Science.gov (United States)

    Ohtani, M.; Hirahara, K.

    2017-12-01

    In SW Japan, the Philippine Sea plate subducts from the south, and large earthquakes of around M (magnitude) 8 repeatedly occur at the plate boundary along the Nankai Trough; these are called the Nankai/Tonankai earthquakes. Near the rupture area of these earthquakes, active volcanoes such as Sakurajima are aligned in the Kyushu region of SW Japan. Volcanoes such as Mt. Fuji are also distributed in the Tokai-Kanto region of SE Japan. The 1707 eruption of Mt. Fuji, called the Hoei eruption, occurred 49 days after one of the series of Nankai/Tonankai earthquakes, the 1707 Hoei earthquake (M8.4). This suggests that the stress field due to an earthquake sometimes helps volcanoes to erupt. When we consider the stress change due to an earthquake, the effect of viscoelastic deformation of the crust is important. FEM is commonly used for modeling such an inelastic effect; however, it requires a high computational cost of O(N^3), where N is the number of discretized cells of the inelastic medium. Recently, a new method based on BIEM was proposed by Barbot and Fialko (2010). In their method, calculation of the stress field due to the inelastic strain is recast as solving the inhomogeneous Navier equation with equivalent body forces representing the inelastic strain. Then, using the stress-strain Green's function in an elastic medium, we can take the inelastic effect into account. In this study, we employ their method to evaluate the stress change at the active volcanoes around the Nankai/Tonankai earthquakes. Their method requires a computational cost and memory storage of O(N^2). We reduce the computation and the memory by applying the fast H-matrices method. With the H-matrices method, a dense matrix is divided into a hierarchical structure of submatrices, and each submatrix is approximated by a low-rank matrix. When we divide the viscoelastic medium into N = 8,640 or 69,120 uniform cuboid cells and apply the H-matrices method, the required storage memory for …

  17. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying

    2015-01-01

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
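
    The ingredient that makes the H-matrix format work can be checked directly: off-diagonal blocks of a smooth covariance matrix have low numerical rank. A sketch with an exponential covariance as a stand-in for Matérn:

```python
import numpy as np

rng = np.random.default_rng(8)
pts = np.sort(rng.random(2000))
C = np.exp(-np.abs(pts[:, None] - pts[None, :]) / 0.2)   # smooth covariance

B = C[:1000, 1000:]                     # off-diagonal (admissible) block
u, s, vt = np.linalg.svd(B, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))        # numerical rank at relative tol 1e-8
Bk = (u[:, :k] * s[:k]) @ vt[:k]        # rank-k approximation of the block
print(k, np.linalg.norm(B - Bk) / np.linalg.norm(B))
```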

  18. Theoretical origin of quark mass matrices

    International Nuclear Information System (INIS)

    Mohapatra, R.N.

    1987-01-01

    This paper presents the theoretical origin of specific quark mass matrices in the grand unified theories. The author discusses the first natural derivation of the Stech-type mass matrix in unified gauge theories. A solution to the strong CP-problem is provided

  19. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
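
    A toy version of the visualization step, hashing short byte sequences (as stand-ins for the paper's opcode n-grams) to RGB pixels on a fixed-size image matrix; the hash choice, image size, and input here are invented, and the similarity computation is not shown:

```python
import hashlib
import numpy as np

def bytes_to_image(data: bytes, side: int = 64) -> np.ndarray:
    """Map consecutive 2-byte chunks to RGB pixels via a hash."""
    img = np.zeros((side, side, 3), dtype=np.uint8)
    for i in range(len(data) - 1):
        h = hashlib.md5(data[i:i + 2]).digest()
        r, c = h[0] % side, h[1] % side          # pixel position from hash
        img[r, c] = h[2], h[3], h[4]             # RGB value from hash
    return img

data = bytes(np.random.default_rng(9).integers(0, 256, 4096, dtype=np.uint8))
img = bytes_to_image(data)
print(img.shape, img.mean())
```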

  20. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  1. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    Lasserre, J.B.; Laurent, M.; Mourrain, B.; Rostalski, P.; Trébuchet, P.

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  2. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)

    2011-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  3. Malware analysis using visualized image matrices.

    Science.gov (United States)

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.

  4. Generation speed in Raven's Progressive Matrices Test

    NARCIS (Netherlands)

    Verguts, T.; Boeck, P. De; Maris, E.G.G.

    1999-01-01

    In this paper, we investigate the role of response fluency on a well-known intelligence test, Raven's (1962) Advanced Progressive Matrices (APM) test. Critical in solving this test is finding rules that govern the items. Response fluency is conceptualized as generation speed or the speed at which a

  5. Inversion of General Cyclic Heptadiagonal Matrices

    Directory of Open Access Journals (Sweden)

    A. A. Karawia

    2013-01-01

    Full Text Available We describe a reliable symbolic computational algorithm for inverting general cyclic heptadiagonal matrices by using parallel computing along with recursion. Its computational cost is … operations. The algorithm is implementable in computer algebra systems (CASs) such as MAPLE, MATLAB, and MATHEMATICA. Two examples are presented for the sake of illustration.

  6. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.

  7. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    Science.gov (United States)

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to retain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.

  8. Biomarker identification and effect estimation on schizophrenia –a high dimensional data analysis

    Directory of Open Access Journals (Sweden)

    Yuanzhang Li

    2015-05-01

    Full Text Available Biomarkers have been examined in schizophrenia research for decades. Schizophrenia is associated with elevated medical morbidity and mortality rates, as well as personal and societal costs. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is no commonly accepted statistical approach to identify predictive biomarkers from high-dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers associated with the risk of schizophrenia. Then, we used the gradient scores generated from the selected biomarkers as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression trees (CART), to compare with the biomarkers selected by DGR, and found that about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate the individual effect of each biomarker within their combined effect. In a DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia for males; and Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.

  9. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  10. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can give scientists the opportunity to analyze all experimental data more effectively.
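
    The two routes mentioned in the report can be compared on a toy 4-D integrand with SciPy: nested one-dimensional quadrature versus quasi-Monte Carlo (the integrand below is invented, not the SNS intensity model):

```python
import numpy as np
from scipy import integrate
from scipy.stats import qmc

f = lambda x0, x1, x2, x3: np.exp(-(x0**2 + x1**2 + x2**2 + x3**2))

quad_val, quad_err = integrate.nquad(f, [[0, 1]] * 4)  # nested 1-D quadrature

pts = qmc.Sobol(d=4, seed=10).random_base2(m=14)       # 2^14 Sobol points
qmc_val = np.exp(-(pts**2).sum(axis=1)).mean()         # QMC sample average
print(quad_val, qmc_val)
```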

  11. Computational Strategies for Dissecting the High-Dimensional Complexity of Adaptive Immune Repertoires

    Directory of Open Access Journals (Sweden)

    Enkelejda Miho

    2018-02-01

    Full Text Available The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.

  12. Construction of high-dimensional neural network potentials using environment-dependent atom pairs.

    Science.gov (United States)

    Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg

    2012-05-21

    An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.

  13. Two-Sample Tests for High-Dimensional Linear Regression with an Application to Detecting Interactions.

    Science.gov (United States)

    Xia, Yin; Cai, Tianxi; Cai, T Tony

    2018-01-01

    Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.

  14. Individual-based models for adaptive diversification in high-dimensional phenotype spaces.

    Science.gov (United States)

    Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael

    2016-02-07

    Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in the sense of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.

  16. Exploration of High-Dimensional Scalar Function for Nuclear Reactor Safety Analysis and Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer; Michael Pernice; Robert Nourgaliev

    2013-05-01

    The next generation of methodologies for nuclear reactor Probabilistic Risk Assessment (PRA) explicitly accounts for the time element in modeling the probabilistic system evolution and uses numerical simulation tools to account for possible dependencies between failure events. The Monte-Carlo (MC) and the Dynamic Event Tree (DET) approaches belong to this new class of dynamic PRA methodologies. A challenge of dynamic PRA algorithms is the large amount of data they produce, which may be difficult to visualize and analyze in order to extract useful information. We present a software tool designed to address these challenges. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations. We provide a user's guide to our software tool by highlighting its analysis and visualization capabilities, along with a use case involving a dataset from a nuclear reactor safety simulation.

  17. High-dimensional neural network potentials for solvation: The case of protonated water clusters in helium

    Science.gov (United States)

    Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik

    2018-03-01

    The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.

  18. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
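
    The RV coefficient used above as the cross-dependence measure has a compact closed form. A sketch on invented, column-centered signal matrices:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between column-centered data matrices X and Y."""
    Sxy = X.T @ Y
    Sxx = X.T @ X
    Syy = Y.T @ Y
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
    return num / den

rng = np.random.default_rng(11)
T = 200
X = rng.normal(size=(T, 3))                          # factors, region 1
Y = X @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(T, 4))
print(rv_coefficient(X - X.mean(0), Y - Y.mean(0)))  # close to 1
```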

  19. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if the digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  20. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
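
    A bare-bones sketch of the FANS recipe: estimate marginal class-conditional densities per feature, replace each feature by its log density ratio, then fit a penalized logistic regression. The data are invented and the paper's sample-splitting and tuning steps are omitted:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n) > 0).astype(int)

Z = np.empty_like(X)
for j in range(p):
    f1 = gaussian_kde(X[y == 1, j])   # class-conditional marginal densities
    f0 = gaussian_kde(X[y == 0, j])
    Z[:, j] = np.log(f1(X[:, j]) + 1e-12) - np.log(f0(X[:, j]) + 1e-12)

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Z, y)
print(clf.score(Z, y))                # in-sample accuracy on augmented features
```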