Accurate low-rank matrix recovery from a small number of linear measurements
Candes, Emmanuel J
2009-01-01
We consider the problem of recovering a low-rank matrix M from a small number of random linear measurements. A popular and useful example of this problem is matrix completion, in which the measurements reveal the values of a subset of the entries, and we wish to fill in the missing entries (this is the famous Netflix problem). When M is believed to have low rank, one would ideally try to recover M by finding the minimum-rank matrix that is consistent with the data; this is, however, problematic, since rank minimization is a nonconvex problem that is, in general, intractable. Nuclear-norm minimization has been proposed as a tractable alternative, and past papers have delved into the theoretical properties of nuclear-norm minimization algorithms, establishing conditions under which minimizing the nuclear norm yields the minimum-rank solution. We review this emerging body of literature and extend and refine previous theoretical results. Our focus is on providing error bounds when M is well approximated by a low-rank matrix, and ...
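For the matrix-completion special case discussed above, the nuclear-norm route is commonly implemented by iterative singular value thresholding. Below is a minimal soft-impute-style sketch, not the paper's own algorithm; the threshold `tau` and iteration count are illustrative choices.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=0.5, n_iter=500):
    """Soft-impute-style completion: fill the missing entries with the
    current estimate, then shrink the singular values of the result."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = svt(np.where(mask, M_obs, X), tau)
    # keep observed entries exactly; missing entries come from the low-rank fit
    return np.where(mask, M_obs, X)
```

Each pass is one proximal step on the nuclear-norm-penalized least-squares objective, so the iterate drifts toward a low-rank matrix that agrees with the observed entries.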
Sparsity tracking for low rank matrix recovery from noise
Deng, Yue; Zhang, Zengke
2010-01-01
Rank-based analysis is a basic approach in many real-world applications. Recently, with the development of compressive sensing, an interesting problem was proposed: recovering a low-rank matrix from sparse noise. In this paper, we address this problem and propose a low-rank matrix recovery algorithm based on sparsity tracking. The core of the proposed Sparsity Tracking Recovery (STR) is a heuristic kernel, which is introduced to penalize the noise distribution. With this heuristic, the sparse entries in the noise matrix can be accurately tracked and suppressed. Compared with the state-of-the-art algorithm, STR can handle many tough problems, and its feasible region is much larger. Moreover, if the recovered rank of the matrix is low enough, it can even cope with a non-sparse noise distribution.
Jacobi-Davidson method on low-rank matrix manifolds
Rakhuba, Maxim; Oseledets, Ivan
2017-01-01
In this work we generalize the Jacobi-Davidson method to the case when the eigenvector can be reshaped into a low-rank matrix. In this setting the proposed method inherits the advantages of the original Jacobi-Davidson method, has lower complexity, and requires less storage. We also introduce a low-rank version of the Rayleigh quotient iteration, which naturally arises in the Jacobi-Davidson method.
Chen, Shuhang; Liu, Huafeng; Hu, Zhenghui; Zhang, Heye; Shi, Pengcheng; Chen, Yunmei
2015-07-01
Although of great clinical value, accurate and robust reconstruction and segmentation of dynamic positron emission tomography (PET) images remain great challenges due to low spatial resolution and high noise. In this paper, we propose a unified framework that exploits temporal correlations and variations within image sequences based on low-rank and sparse matrix decomposition. Thus, the two separate inverse problems, PET image reconstruction and segmentation, are accomplished simultaneously. Considering the low signal-to-noise ratio and the piecewise-constant assumption of PET images, we also propose to regularize the low-rank and sparse matrices with a vectorial total variation norm. The resulting optimization problem is solved by an augmented Lagrangian method with variable splitting. The effectiveness of the proposed approach is validated on realistic Monte Carlo simulation datasets and real patient data.
Low-rank Matrix Completion with Noisy Observations: a Quantitative Comparison
Keshavan, Raghunandan H; Oh, Sewoong
2009-01-01
We consider a problem of significant practical importance, namely, the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case when the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can be used to reconstruct real data matrices, as well as randomly generated matrices, accurately.
Exactly Recovering Low-Rank Matrix in Linear Time via $l_1$ Filter
Liu, Risheng; Su, Zhixun
2011-01-01
Recovering a low-rank matrix from corrupted data, known as Robust PCA, has attracted considerable interest in recent years. This problem can be exactly solved by a combined nuclear-norm and $l_1$-norm minimization. However, due to the computational burden of the SVD inherent in nuclear-norm minimization, traditional methods suffer from high computational complexity, especially on large-scale datasets. In this paper, inspired by the idea of digital filtering in image processing, we propose a novel algorithm, named the $l_1$ Filter, for solving Robust PCA at linear cost. The $l_1$ Filter is defined by a seed, which is an exactly recovered small submatrix of the underlying low-rank matrix. By solving several $l_1$ minimization problems in parallel, the full low-rank matrix can be exactly recovered from corrupted observations at linear cost. Both theoretical analysis and experimental results show that our method is an efficient way to exactly recover a low-rank matrix in linear time.
Annihilating Filter-Based Low-Rank Hankel Matrix Approach for Image Inpainting.
Jin, Kyong Hwan; Ye, Jong Chul
2015-11-01
In this paper, we propose a patch-based image inpainting method using a low-rank Hankel structured matrix completion approach. The proposed method exploits the annihilation property between a shift-invariant filter and image data observed in many existing inpainting algorithms. In particular, by exploiting the commutative property of convolution, the annihilation property yields a low-rank block Hankel structured data matrix, and the image inpainting problem becomes a low-rank structured matrix completion problem. The block Hankel structured matrices are obtained patch-by-patch to adapt to local changes in the image statistics. To solve the structured low-rank matrix completion problem, we employ an alternating direction method of multipliers with factorization matrix initialization using the low-rank matrix fitting algorithm. As a side product of the matrix factorization, locally adaptive dictionaries can also be constructed easily. Despite the simplicity of the algorithm, experimental results using irregularly subsampled images as well as various images with globally missing patterns show that the proposed method outperforms existing state-of-the-art image inpainting methods.
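The lifting from an image patch to a block Hankel matrix that underlies this approach can be illustrated directly. The helper below is a generic construction (the window sizes are assumptions, not the paper's settings); the annihilation property shows up as rank deficiency for simple patches, e.g. a constant patch yields a rank-one lifted matrix.

```python
import numpy as np

def block_hankel(patch, k1, k2):
    """Lift a 2-D patch into a (block Hankel structured) matrix whose rows
    are the vectorized k1 x k2 sliding windows of the patch."""
    n1, n2 = patch.shape
    rows = []
    for i in range(n1 - k1 + 1):
        for j in range(n2 - k2 + 1):
            rows.append(patch[i:i + k1, j:j + k2].ravel())
    return np.array(rows)
```

A constant patch is annihilated by any zero-sum filter, so its lifted matrix has rank 1; a linear ramp `patch[i, j] = i + j` lifts to a matrix of rank at most 2.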
Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka
2017-07-01
This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, which is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail the phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
A Class of Weighted Low Rank Approximation of the Positive Semidefinite Hankel Matrix
Directory of Open Access Journals (Sweden)
Jianchao Bai
2015-01-01
We consider the weighted low-rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. Using the Vandermonde representation, we first transform the problem into an unconstrained optimization problem and then use the nonlinear conjugate gradient algorithm with the Armijo line search to solve the equivalent unconstrained optimization problem. Numerical examples illustrate that the new method is feasible and effective.
Reconstruction of a Low-rank Matrix in the Presence of Gaussian Noise
Shabalin, Andrey
2010-01-01
In this paper we study the problem of reconstructing a low-rank matrix observed with additive Gaussian noise. First we show that under mild assumptions (about the prior distribution of the signal matrix) we can restrict our attention to reconstruction methods that are based on the singular value decomposition of the observed matrix and act only on its singular values (preserving the singular vectors). Then we determine the effect of noise on the SVD of low-rank matrices by building a connection between the matrix reconstruction problem and the spiked population model in random matrix theory. Based on this knowledge, we propose a new reconstruction method, called RMT, that is designed to reverse the effect of the noise on the singular values of the signal matrix and to adjust for its effect on the singular vectors. With an extensive simulation study we show that the proposed method outperforms even oracle versions of both soft and hard thresholding methods and closely matches the performance of a general oracle scheme.
Multi-shot multi-channel diffusion data recovery using structured low-rank matrix completion
Mani, Merry; Kelley, Douglas; Magnotta, Vincent
2016-01-01
Purpose: To introduce a novel method for the recovery of multi-shot diffusion-weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods: Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover the unaliased images. In the new formulation, the k-space data of the unaliased DWI are recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to data consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of fully-sampled as well as under-sampled MS-DW ...
Low-Rank Positive Semidefinite Matrix Recovery From Corrupted Rank-One Measurements
Li, Yuanxin; Sun, Yue; Chi, Yuejie
2017-01-01
We study the problem of estimating a low-rank positive semidefinite (PSD) matrix from a set of rank-one measurements using sensing vectors composed of i.i.d. standard Gaussian entries, which are possibly corrupted by arbitrary outliers. This problem arises in applications such as phase retrieval, covariance sketching, quantum state tomography, and power spectrum estimation. We first propose a convex optimization algorithm that seeks the PSD matrix with the minimum $\\ell_1$-norm of the observation residual. The advantage of our algorithm is that it is parameter-free, eliminating the need for tuning and allowing easy implementation. We establish that with high probability, a low-rank PSD matrix can be exactly recovered as soon as the number of measurements is large enough, even when a fraction of the measurements are corrupted by outliers with arbitrary magnitudes. Moreover, the recovery is also stable against bounded noise. With the additional information of an upper bound on the rank of the PSD matrix, we propose another, non-convex algorithm based on subgradient descent that demonstrates excellent empirical performance in terms of computational efficiency and accuracy.
Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2016-06-01
In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iterate generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, in which a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
Saunderson, James; Parrilo, Pablo A; Willsky, Alan S
2012-01-01
In this paper we establish links between, and new results for, three problems that are not usually considered together. The first is a matrix decomposition problem that arises in areas such as statistical modeling and signal processing: given a matrix $X$ formed as the sum of an unknown diagonal matrix and an unknown low rank positive semidefinite matrix, decompose $X$ into these constituents. The second problem we consider is to determine the facial structure of the set of correlation matrices, a convex set also known as the elliptope. This convex body, and particularly its facial structure, plays a role in applications from combinatorial optimization to mathematical finance. The third problem is a basic geometric question: given points $v_1,v_2,...,v_n\\in \\R^k$ (where $n > k$) determine whether there is a centered ellipsoid passing \\emph{exactly} through all of the points. We show that in a precise sense these three problems are equivalent. Furthermore we establish a simple sufficient condition on a subspac...
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2016-07-26
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to state-of-the-art methods in convergence efficiency and numerical performance.
DEFF Research Database (Denmark)
Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano;
2014-01-01
We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly known as the EEG inverse problem. We propose a new method to induce neurophysiologically meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
Energy Technology Data Exchange (ETDEWEB)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan; Ringenburg, Michael F.; Chhugani, Jatin; Racah, Evan; Singh, Mohitdeep; Yao, Yushu; Fischer, Curt; Ruebel, Oliver; Bowen, Benjamin; Lewis, Norman G.; Mahoney, Michael W.; Krishnamurthy, Venkat; Prabhat, Mr
2017-07-27
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
Low rank Multivariate regression
Giraud, Christophe
2010-01-01
We consider in this paper the multivariate regression problem, when the target regression matrix $A$ is close to a low-rank matrix. Our primary interest is in the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion to select among a family of low-rank estimators and to prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known and show that the penalties appearing in our criteria are minimal (in some sense). These penalties involve the expected value of the Ky Fan quasi-norm of certain random matrices. These quantities can be evaluated easily in practice, and upper bounds can be derived from recent results in random matrix theory.
Nuclear norm penalization and optimal rates for noisy low rank matrix completion
Koltchinskii, Vladimir; Lounici, Karim
2010-01-01
This paper deals with the trace regression model where $n$ entries or linear combinations of entries of an unknown $m_1\\times m_2$ matrix $A_0$ corrupted by noise are observed. We propose a new nuclear norm penalized estimator of $A_0$ and establish a general sharp oracle inequality for this estimator for arbitrary values of $n,m_1,m_2$ under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form and we prove that it satisfies oracle inequalities with faster rates of convergence than in the previous works. They are valid, in particular, in the high-dimensional setting $m_1m_2\\gg n$. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix $A_0$, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recove...
Optimal spectral norm rates for noisy low-rank matrix completion
Lounici, Karim
2011-01-01
In this paper we consider the trace regression model where $n$ entries or linear combinations of entries of an unknown $m_1\\times m_2$ matrix $A_0$ corrupted by noise are observed. We establish for the nuclear-norm penalized estimator of $A_0$ introduced in \\cite{KLT} a general sharp oracle inequality with the spectral norm for arbitrary values of $n,m_1,m_2$ under an incoherence condition on the sampling distribution $\\Pi$ of the observed entries. Then, we apply this method to the matrix completion problem. In this case, we prove that it satisfies an optimal oracle inequality for the spectral norm, thus improving upon the only existing result \\cite{KLT} concerning the spectral norm, which assumes that the sampling distribution is uniform. Note that our result is valid, in particular, in the high-dimensional setting $m_1m_2\\gg n$. Finally we show that the obtained rate is optimal up to logarithmic factors in a minimax sense.
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
Cai, Jian-Feng; Gao, Hao; Jiang, Steve B; Shen, Zuowei; Zhao, Hongkai
2012-01-01
Respiration-correlated CBCT, commonly called 4DCBCT, provides respiratory phase-resolved CBCT images. In many clinical applications, it is preferable to reconstruct a true 4DCBCT with the fourth dimension being time, i.e., each CBCT image is reconstructed based on the corresponding instantaneous projection. We propose in this work a novel algorithm for the reconstruction of this truly time-resolved CBCT, called cine-CBCT, by effectively utilizing the underlying temporal coherence, such as periodicity or repetition, in the cine-CBCT images. Assuming each column of the matrix $\\bm{U}$ represents a CBCT image to be reconstructed and the total number of columns is the same as the number of projections, the central idea of our algorithm is that the rank of $\\bm{U}$ is much smaller than the number of projections, so we can use a matrix factorization form $\\bm{U}=\\bm{L}\\bm{R}$ for $\\bm{U}$. The number of columns of the matrix $\\bm{L}$ constrains the rank of $\\bm{U}$ and hence implicitly imposes a temporal cohere...
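The rank constraint imposed through the factorization $U = LR$ (the number of columns of $L$ bounding the rank) can be illustrated with a generic alternating least squares completion sketch. This is not the paper's cine-CBCT algorithm, which works with projection data; the rank, regularization, and iteration count here are assumptions for a toy completion problem.

```python
import numpy as np

def als_complete(M_obs, mask, rank=2, n_iter=200, reg=1e-3, seed=0):
    """Fit the factorization M ~ L @ R to the observed entries only,
    alternating exact least-squares updates of L and R.  The number of
    columns of L (= rank) caps the rank of the reconstruction."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((rank, n))
    for _ in range(n_iter):
        for i in range(m):                       # update each row of L
            obs = mask[i]
            A = R[:, obs].T
            L[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                   A.T @ M_obs[i, obs])
        for j in range(n):                       # update each column of R
            obs = mask[:, j]
            A = L[obs]
            R[:, j] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                      A.T @ M_obs[obs, j])
    return L @ R
```

Each subproblem is a small ridge-regularized least-squares solve, so no SVD of the full matrix is ever needed; this is the usual computational appeal of explicit factorization over nuclear-norm formulations.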
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-09-01
The detection of a moving target using an IR-UWB radar involves the core task of separating the waves reflected by the static background from those reflected by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in UWB radar-based moving target detection. Robust PCA models are criticized for being batch-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames that is continually updated, without changing its size, as new frames are captured. We show that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over classic PCA and the commonly used exponential averaging method.
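The RPCA decomposition M ≈ L + S used here follows a standard inexact ALM recipe that can be sketched as below. This is the generic batch IALM scheme with the usual default weight λ = 1/√max(m, n), not the paper's overlapping-windows variant; the iteration count and penalty schedule are illustrative.

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(M, lam=None, n_iter=60):
    """Robust PCA, M ~ L (low rank) + S (sparse), via inexact augmented
    Lagrange multipliers.  L and S are updated by their proximal operators
    and the multiplier Y enforces the constraint M = L + S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard default weight
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual initialization
    mu, rho = 1.25 / norm2, 1.5                 # penalty and its growth rate
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        mu *= rho
    return L, S
```

In the radar setting, L would model the static background across frames and S the sparse foreground produced by the moving target.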
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling, a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to applications of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender systems; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Energy Technology Data Exchange (ETDEWEB)
Weber, G. F.; Laudal, D. L.
1989-01-01
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
Low-Rank Representation for Incomplete Data
Directory of Open Access Journals (Sweden)
Jiarong Shi
2014-01-01
Low-rank matrix recovery (LRMR) has become an increasingly popular technique for analyzing data with missing entries, gross corruptions, and outliers. As a significant component of LRMR, the model of low-rank representation (LRR) seeks the lowest-rank representation among all samples and is robust for recovering subspace structures. This paper attempts to solve the problem of LRR with partially observed entries. First, we construct a nonconvex minimization by taking the low-rankness, robustness, and incompleteness into consideration. Then we employ the technique of augmented Lagrange multipliers to solve the proposed program. Finally, experimental results on synthetic and real-world datasets validate the feasibility and effectiveness of the proposed method.
Texture Repairing by Unified Low Rank Optimization
Institute of Scientific and Technical Information of China (English)
Xiao Liang; Xiang Ren; Zhengdong Zhang; Yi Ma
2016-01-01
In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low-rank property of texture, the algorithm also uses a sparsity assumption about natural images: because a natural image is piecewise smooth, it is sparse in certain transformed domains (such as the Fourier or wavelet domain). We combine the low-rank and sparsity properties of the texture image in the proposed algorithm. Our algorithm, based on convex optimization, can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. It integrates texture rectification and repairing into one optimization problem. Through extensive simulations, we show that our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method demonstrates a significant advantage over local patch-based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
Low-rank quadratic semidefinite programming
Yuan, Ganzhao
2013-04-01
Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.
Denoising MR Spectroscopic Imaging Data With Low-Rank Approximations
Nguyen, Hien M.; Peng, Xi; Do, Minh N.; Liang, Zhi-Pei
2012-01-01
This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singula...
Weighted Discriminative Dictionary Learning based on Low-rank Representation
Chang, Heyou; Zheng, Hao
2017-01-01
Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank-representation-based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization associates the label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.
Low-Rank Sparse Coding for Image Classification
Zhang, Tianzhu
2013-12-01
In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
Accurate Jones Matrix of the Practical Faraday Rotator
Institute of Scientific and Technical Information of China (English)
王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝
2003-01-01
The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields. Nevertheless, until now only the approximate Jones matrix of practical Faraday rotators has been available. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss, and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
SAR moving target imaging using sparse and low-rank decomposition
Ni, Kang-Yu; Rao, Shankar
2014-05-01
We propose a method to image a complex scene with spotlight synthetic aperture radar (SAR) despite the presence of multiple moving targets. Many recent methods use sparsity-based reconstruction coupled with phase error corrections of moving targets to reconstruct stationary scenes. However, these methods rely on the assumption that the scene itself is sparse and thus unfortunately cannot handle realistic SAR scenarios with complex backgrounds consisting of more than just a few point targets. Our method makes use of sparse and low-rank (SLR) matrix decomposition, an efficient method for decomposing a low-rank matrix and sparse matrix from their sum. For detecting the moving targets and reconstructing the stationary background, SLR uses a convex optimization model that penalizes the nuclear norm of the low rank background structure and the L1 norm of the sparse moving targets. We propose an L1-norm regularization reconstruction method to form the input data matrix, which is grossly corrupted by the moving targets. Each column of the input matrix is a reconstructed SAR image with measurements from a small number of azimuth angles. The use of the L1-norm regularization and a sparse transform permits us to reconstruct the scene with significantly fewer measurements so that moving targets are approximately stationary. We demonstrate our SLR-based approach using simulations adapted from the GOTCHA Volumetric SAR data set. These simulations show that SLR can accurately image multiple moving targets with different individual motions in complex scenes where methods that assume a sparse scene would fail.
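The sparse-plus-low-rank decomposition that SLR relies on is commonly solved by principal component pursuit. Below is a minimal numpy sketch of the standard inexact-ALM solver; the function names `rpca`, `svt`, and `soft` are illustrative, not from the paper, and the parameter defaults follow common practice rather than the authors' settings.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft thresholding: proximal operator of the L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, lam=None, max_iter=500, tol=1e-7):
    """Split D into low-rank L plus sparse S via inexact ALM (a sketch)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common default weight on the L1 term
    norm_D = np.linalg.norm(D)
    mu = 1.25 / np.linalg.norm(D, 2)     # penalty parameter, grown each step
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        mu = min(1.5 * mu, 1e7)
        if np.linalg.norm(D - L - S) < tol * norm_D:
            break
    return L, S
```

In the SAR setting described above, each column of `D` would be a reconstructed image from a few azimuth angles, so the static background lands in `L` and the moving targets in `S`.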
Low-rank sparse learning for robust visual tracking
Zhang, Tianzhu
2012-01-01
In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.
Link Prediction with Low-Rank and Local Constraint Matrix Estimation (基于低秩和局部约束矩阵估计的链接预测方法)
Institute of Scientific and Technical Information of China (English)
刘冶; 印鉴; 邓泽亚; 王智圣; 潘炎
2015-01-01
In the big data era, research on link prediction for social networks on the Web and other complex networks has attracted widespread interest. Link prediction methods have been widely used in social network relationship mining, personalized recommendation, and biological pharmacy. In the link prediction problem, a similarity matrix is used to represent the probability that a link exists between any pair of nodes, so the method for estimating the similarity matrix is the most crucial step. In recent years, most research has focused on data-driven methods that construct the similarity matrix through machine learning and optimization algorithms. This paper proposes a novel data-dependent link prediction method with a global low-rank structure assumption and local constraints on node features in the network. The proposed method is designed for scalable divide-and-conquer computation on complex networks and is suitable for distributed computation. Extensive experiments on several real-world datasets show that the proposed link prediction measure obtains competitive performance compared with the baselines. The results also indicate that the new algorithm is effective, robust, and scalable for complex networks.
Estimation of Low-Rank Covariance Function
Koltchinskii, Vladimir; Lounici, Karim; Tsybakov, Alexander B.
2015-01-01
We consider the problem of estimating a low rank covariance function $K(t,u)$ of a Gaussian process $S(t), t\in [0,1]$ based on $n$ i.i.d. copies of $S$ observed in white noise. We suggest a new estimation procedure adapting simultaneously to the low rank structure and the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and exhibits superior performance as compared to the sample covariance function by a polynomial factor in the sample size $n$...
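Nuclear-norm penalization of a covariance estimate has a convenient closed form in the simplest least-squares setting: soft-threshold the eigenvalues of the sample covariance. This is only a simplified illustration of the penalty, not the adaptive, smoothness-aware procedure of the paper; the function name `lowrank_cov` and the choice of `lam` are hypothetical.

```python
import numpy as np

def lowrank_cov(samples, lam):
    # Sample covariance, then soft-threshold its eigenvalues: the closed-form
    # minimizer of (1/2)||K - Sigma_hat||_F^2 + lam * ||K||_* over PSD K.
    Sigma = np.cov(samples, rowvar=False, bias=True)
    w, V = np.linalg.eigh(Sigma)
    w = np.maximum(w - lam, 0.0)   # shrink small eigenvalues to exactly zero
    return (V * w) @ V.T
```

When the data truly lie near a low-dimensional subspace, the thresholding kills the spurious directions and returns a genuinely low-rank covariance estimate.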
Denoising MR spectroscopic imaging data with low-rank approximations.
Nguyen, Hien M; Peng, Xi; Do, Minh N; Liang, Zhi-Pei
2013-01-01
This paper addresses the denoising problem associated with magnetic resonance spectroscopic imaging (MRSI), where signal-to-noise ratio (SNR) has been a critical problem. A new scheme is proposed, which exploits two low-rank structures that exist in MRSI data, one due to partial separability and the other due to linear predictability. Denoising is performed by arranging the measured data in appropriate matrix forms (i.e., Casorati and Hankel) and applying low-rank approximations by singular value decomposition (SVD). The proposed method has been validated using simulated and experimental data, producing encouraging results. Specifically, the method can effectively denoise MRSI data in a wide range of SNR values while preserving spatial-spectral features. The method could prove useful for denoising MRSI data and other spatial-spectral and spatial-temporal imaging data as well.
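The core denoising step described above, forming a Casorati-style (voxels x spectral points) matrix and truncating its SVD, can be sketched as follows. The function name and synthetic data are illustrative; real MRSI processing would also exploit the Hankel/linear-predictability structure, which this sketch omits.

```python
import numpy as np

def lowrank_denoise(casorati, rank):
    # Truncated SVD: keep only the top singular components as the denoised data.
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Illustrative use: 100 voxels x 64 spectral points, rank-3 by partial
# separability, corrupted by additive Gaussian noise.
rng = np.random.default_rng(2)
clean = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 64))
noisy = clean + 0.5 * rng.standard_normal((100, 64))
denoised = lowrank_denoise(noisy, rank=3)
```

Because the noise is spread over all singular directions while the signal occupies only a few, the truncation removes most of the noise energy.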
Image Inpainting Algorithm Based on Low-Rank Approximation and Texture Direction
Directory of Open Access Journals (Sweden)
Jinjiang Li
2014-01-01
Existing image inpainting algorithms based on low-rank matrix approximation are not suitable for complex, large-scale damaged texture images. An inpainting algorithm based on low-rank approximation and texture direction is proposed in this paper. First, we decompose the image using a low-rank approximation method. Then the area to be repaired is interpolated by a level set algorithm, and a new image is reconstructed from the boundary values of the level set. To obtain a better restoration effect, we iterate the low-rank decomposition and level set interpolation. Taking into account the impact of texture direction, we segment the texture and perform low-rank decomposition along the texture direction. Experimental results show that the new algorithm is suitable for texture recovery while maintaining the overall consistency of the structure, and can be used to repair large-scale damaged images.
The optimized expansion based low-rank method for wavefield extrapolation
Wu, Zedong
2014-03-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion- and artifact-free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower-rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of shear-wave artifacts, and the algorithm does not require that η > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
Moving object detection via low-rank total variation regularization
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the l1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described by a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g., periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the l1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the l1-penalty especially when the outlier is in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that the proposed method works effectively on a large range of complex scenarios.
Analysis and Improvement of Low Rank Representation for Subspace segmentation
Siming, Wei
2011-01-01
We analyze and improve low rank representation (LRR), the state-of-the-art algorithm for subspace segmentation of data. We prove that for the noiseless case, the optimization model of LRR has a unique solution, which is the shape interaction matrix (SIM) of the data matrix. So in essence LRR is equivalent to factorization methods. We also prove that the minimum value of the optimization model of LRR is equal to the rank of the data matrix. For the noisy case, we show that LRR can be approximated as a factorization method that combines noise removal by column sparse robust PCA. We further propose an improved version of LRR, called Robust Shape Interaction (RSI), which uses the corrected data as the dictionary instead of the noisy data. RSI is more robust than LRR when the corruption in data is heavy. Experiments on both synthetic and real data testify to the improved robustness of RSI.
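The closed-form result stated above is easy to check numerically: the shape interaction matrix V_r V_r^T, built from the skinny SVD of the data matrix, satisfies the noiseless LRR constraint X = XZ, and its nuclear norm equals the rank of X. A small numpy sketch (the function name is illustrative):

```python
import numpy as np

def shape_interaction_matrix(X, tol=1e-10):
    # Skinny SVD of the data matrix X (columns are samples); the SIM is
    # Vr @ Vr.T, the claimed solution of noiseless LRR:
    # min ||Z||_* subject to X = X Z.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int((s > tol * s[0]).sum())   # numerical rank of X
    Vr = Vt[:r]                       # top-r right singular vectors (rows)
    return Vr.T @ Vr
```

Since the SIM is an orthogonal projection of rank r, its nonzero singular values are all 1, so its nuclear norm is exactly rank(X), matching the minimum value proved in the paper.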
Robust Generalized Low Rank Approximations of Matrices.
Directory of Open Access Journals (Sweden)
Jiarong Shi
In recent years, the intrinsic low-rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise, and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low-rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods.
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low-rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise, and complete missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low-rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM obtains better denoising and compression performance than other methods.
Biodepolymerization studies of low rank Indian coals
Energy Technology Data Exchange (ETDEWEB)
Selvi, V.A.; Banerjee, R.; Ram, L.C.; Singh, G. [FRI, Dhanbad (India). Environmental Management Division
2009-10-15
Biodepolymerization of some lower-rank Indian coals by Pleurotus djamor, Pleurotus citrinopileatus, and Aspergillus species was studied in a batch system. The main disadvantage of burning low-rank coals is their low calorific value. To get the maximum benefit from low-rank coals, their non-fuel uses need to be explored, and the liquefaction of coal is a preliminary process for such approaches. The present study was undertaken specifically to investigate the optimization of biodepolymerization of Neyveli lignite by P. djamor. The pH of the media reached a constant value of about 7.8 through microbial action. The effects of different carbon and nitrogen sources and the influence of chelators and metal ions on the depolymerization of lignite were also studied. Lignite was solubilized by P. djamor only to a limited extent without the addition of carbon and nitrogen sources. Sucrose was the most suitable carbon source for coal depolymerization by P. djamor, and sodium nitrate, followed by urea, was the best nitrogen source. Chelators such as salicylic acid and TEA, and the metal ions Mg²⁺, Fe³⁺, Ca²⁺, Cu²⁺, and Mn²⁺, enhanced the lignite solubilization process. The findings of the study showed that, compared to sub-bituminous and bituminous coals, the lignite has a higher rate of solubilization activity.
Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models
El Gharamti, Mohamad
2010-12-01
Understanding the geology and hydrology of the subsurface is important for modeling fluid flow and the behavior of the contaminant. It is essential to have accurate knowledge of the movement of the contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied to a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainty, the deterministic model by itself cannot give exact estimates of the future contaminant state. Incorporating observations in the model can guide it to the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost required by the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF), approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large contaminant problems where use of the full KF is not possible. Experimental results show that with both perfect and imperfect models, the low-rank filters can provide estimates as accurate as those of the full KF but at much lower computational cost. Localization can help the filter analysis as long as there are enough neighboring data at the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
Negahban, Sahand
2009-01-01
High-dimensional inference refers to problems of statistical estimation in which the ambient dimension of the data may be comparable to or possibly even larger than the sample size. We study an instance of high-dimensional inference in which the goal is to estimate a matrix $\\Theta^* \\in \\real^{k \\times p}$ on the basis of $N$ noisy observations, and the unknown matrix $\\Theta^*$ is assumed to be either exactly low rank, or ``near'' low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an $M$-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and then illustrate their consequences for a number of specific matrix models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices fro...
Robust Recovery of Subspace Structures by Low-Rank Representation
Liu, Guangcan; Yan, Shuicheng; Sun, Ju; Yu, Yong; Ma, Yi
2010-01-01
Data that arises from computer vision and image processing is often characterized by a mixture of multiple linear (or affine) subspaces, leading to the challenging problem of subspace segmentation. We observe that the heart of segmentation is to deal with the data that may not strictly follow subspace structures, i.e., to handle the data corrupted by noise. In this work we therefore address the subspace recovery problem. Given a set of data samples approximately drawn from a union of multiple subspaces, our goal is to segment the samples into their respective subspaces and correct the possible noise as well, i.e., our goal is to recover the subspace structures from corrupted data. To this end, we propose low-rank representation (LRR) for recovering a low-rank data matrix from corrupted observations. The recovery is performed by seeking the lowest-rank representation among all the candidates that can represent the data vectors as linear combinations of the basis in a given dictionary. LRR fits well the subspac...
Biogasification of low-rank coal
Energy Technology Data Exchange (ETDEWEB)
Harding, R.; Czarnecki, S.; Isbister, J.; Barik, S. (ARCTECH, Inc., Chantilly, VA (United States))
1993-02-01
ARCTECH is developing a coal biogasification technology, the "MicGAS Process," for producing clean fuel forms such as methane. The overall objective of this research project was to characterize and construct an efficient coal-gasifying culture capable of converting Texas lignite to methane. The technical feasibility of bioconversion of Texas lignite to methane, volatile fatty acids, alcohols, and other soluble organic products has been demonstrated. Several biogasification cultures were evaluated for their ability to degrade low-rank coals to methane, and Mic-1, a mixed culture derived from a wood-eating Zootermopsis termite species, was identified as the most active and efficient for biogasification of Texas lignite. Parameters such as pH, temperature, redox potential, coal particle size, coal solids loading, culture age, nutrient amendments, and biomass concentration were studied to determine the optimum conditions required for efficient biogasification of coal. Analytical methods for monitoring the production of methane, degradation intermediates, and biomass were developed. The most significant achievements were: (1) development of analytical methodology to monitor coal biogasification; (2) confirmation of the biogasification efficiency of the Mic-1 culture; (3) the ability of the Mic-1 consortium to retain coal-degrading activity when grown in the absence of coal; and (4) significantly higher (ca. 26%) methane production from micronized coal (ca. 10 gm) than from larger coal particle sizes.
Efficient Radio Map Construction Based on Low-Rank Approximation for Indoor Positioning
Directory of Open Access Journals (Sweden)
Yongli Hu
2013-01-01
Fingerprint-based positioning in a wireless local area network (WLAN) environment has received much attention recently. One key issue for the positioning method is radio map construction, which generally requires significant effort to collect enough measurements of received signal strength (RSS). Based on the observation that RSSs have high spatial correlation, we propose an efficient radio map construction method based on low-rank approximation. Different from conventional interpolation methods, the proposed method represents the distribution of RSSs as a low-rank matrix and constructs the dense radio map from relatively sparse measurements by a revised low-rank matrix completion method. To evaluate the proposed method, both simulation tests and field experiments have been conducted. The experimental results indicate that the proposed method can significantly reduce the number of required RSS measurements. Moreover, when the constructed radio maps are used for positioning, the positioning accuracy is also improved.
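The abstract does not specify the revised completion method, so as a hedged stand-in, the generic idea of filling in a sparsely sampled, low-rank RSS matrix can be sketched with rank-truncated alternating projections; `complete_radio_map` and its parameters are hypothetical, not the authors' algorithm.

```python
import numpy as np

def complete_radio_map(observed, mask, rank, n_iter=500):
    # Alternate between projecting onto the set of rank-r matrices (via
    # truncated SVD) and re-imposing the observed RSS entries.
    X = np.where(mask, observed, 0.0)   # unobserved entries start at zero
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
        X[mask] = observed[mask]                   # keep measured values fixed
    return X
```

With enough randomly placed measurements relative to the rank, the iteration typically converges to a dense map that agrees with the sparse survey while interpolating the missing grid points.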
Low-Rank and Sparsity Analysis Applied to Speech Enhancement Via Online Estimated Dictionary
Sun, Pengfei; Qin, Jun
2016-12-01
We propose an online estimated dictionary based single-channel speech enhancement algorithm, which focuses on low-rank and sparse matrix decomposition. In the proposed algorithm, a noisy speech spectral matrix is considered as the summation of low-rank background noise components and an activation of the online speech dictionary, on which both low-rank and sparsity constraints are imposed. This decomposition takes advantage of the high expressiveness of the locally estimated dictionary for speech components. The local dictionary can be obtained by estimating the speech presence probability with the Expectation-Maximization algorithm, in which a generalized Gamma prior for the speech magnitude spectrum is used. The evaluation results show that the proposed algorithm achieves significant improvements when compared to four other speech enhancement algorithms.
Proceedings of the sixteenth biennial low-rank fuels symposium
Energy Technology Data Exchange (ETDEWEB)
1991-01-01
Low-rank coals represent a major energy resource for the world. The Low-Rank Fuels Symposium, building on the traditions established by the Lignite Symposium, focuses on the key opportunities for this resource. This conference offers a forum for leaders from industry, government, and academia to gather to share current information on the opportunities represented by low-rank coals. In the United States and throughout the world, the utility industry is the primary user of low-rank coals. As such, current experiences and future opportunities for new technologies in this industry were the primary focuses of the symposium.
Recovering low-rank matrices from few coefficients in any basis
Gross, David
2009-01-01
We establish novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler, and more general than previous approaches. It is shown that an unknown (n x n) matrix of rank r can be efficiently reconstructed given knowledge of only O(n r nu log^2n) randomly sampled expansion coefficients with respect to any given matrix basis. The number nu quantifies the "degree of incoherence" between the unknown matrix and the basis. We discuss bases with respect to which every low-rank matrix is incoherent. Existing work concentrated mostly on the problem of "matrix completion", where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates seem to be slightly tighter. This work...
An Approach to Streaming Video Segmentation With Sub-Optimal Low-Rank Decomposition.
Li, Chenglong; Lin, Liang; Zuo, Wangmeng; Wang, Wenzhong; Tang, Jin
2016-05-01
This paper investigates how to perform robust and efficient video segmentation while suppressing the effects of data noises and/or corruptions, and an effective approach is introduced to this end. First, a general algorithm, called sub-optimal low-rank decomposition (SOLD), is proposed to pursue the low-rank representation for video segmentation. Given the data matrix formed by supervoxel features of an observed video sequence, SOLD seeks a sub-optimal solution by making the matrix rank explicitly determined. In particular, the representation coefficient matrix with the fixed rank can be decomposed into two sub-matrices of low rank, and then we iteratively optimize them with closed-form solutions. Moreover, we incorporate a discriminative replication prior into SOLD based on the observation that small-size video patterns tend to recur frequently within the same object. Second, based on SOLD, we present an efficient inference algorithm to perform streaming video segmentation in both unsupervised and interactive scenarios. More specifically, the constrained normalized-cut algorithm is adopted by incorporating the low-rank representation with other low level cues and temporal consistent constraints for spatio-temporal segmentation. Extensive experiments on two public challenging data sets VSB100 and SegTrack suggest that our approach outperforms other video segmentation approaches in both accuracy and efficiency.
Multi-Label Classiﬁcation Based on Low Rank Representation for Image Annotation
Directory of Open Access Journals (Sweden)
Qiaoyu Tan
2017-01-01
Annotating remote sensing images is a challenging task because of its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix, then it adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. Empirical study demonstrates that MLC-LRR achieves better performance in annotating images than these competing methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.
Akbudak, Kadir
2017-05-11
Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix problem. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting low rank character in this context is very recent, and will be a key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile low-rank Cholesky factorization and propose a high performance implementation using an OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint saving on up to 200K×200K covariance matrix size show a gain of more than an order of magnitude for both metrics, against state-of-the-art open-source and vendor optimized numerical libraries, while preserving the numerical accuracy fidelity of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.
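The "data-sparse" structure described above is easy to observe numerically: when correlation decays with distance, an off-diagonal block coupling two well-separated point sets is numerically of very low rank. A small sketch (the 1-D grid and exponential kernel are illustrative assumptions, unrelated to the authors' implementation):

```python
import numpy as np

# covariance with correlation decaying in distance, on a 1-D grid
pts = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(pts[:, None] - pts[None, :]) / 0.1)

# off-diagonal block couples the left half-grid to the right half-grid
B = C[:100, 100:]
s = np.linalg.svd(B, compute_uv=False)
num_rank = int(np.sum(s > 1e-8 * s[0]))   # numerical rank at 1e-8 tolerance
# storing the block as U @ V.T with num_rank columns needs
# 2 * 100 * num_rank floats instead of 100 * 100
```

Tile low-rank methods exploit exactly this: each off-diagonal tile is stored in such a compressed factored form, which is where the order-of-magnitude memory saving comes from.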
Dense Error Correction for Low-Rank Matrices via Principal Component Analysis
Ganesh, Arvind; Li, Xiaodong; Candes, Emmanuel J; Ma, Yi
2010-01-01
We consider the problem of recovering a low-rank matrix when some of its entries, whose locations are not known a priori, are corrupted by errors of arbitrarily large magnitude. It has recently been shown that this problem can be solved efficiently and effectively by a convex program named Principal Component Pursuit (PCP), provided that the fraction of corrupted entries and the rank of the matrix are both sufficiently small. In this paper, we extend that result to show that the same convex program, with a slightly improved weighting parameter, exactly recovers the low-rank matrix even if "almost all" of its entries are arbitrarily corrupted, provided the signs of the errors are random. We corroborate our result with simulations on randomly generated matrices and errors.
On low-rank updates to the singular value and Tucker decompositions
Energy Technology Data Exchange (ETDEWEB)
O' Hara, M J
2009-10-06
The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
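For a rank-1 modification, the flavor of such an update can be sketched in a few lines of numpy: the new SVD is obtained from a small (r+1) x (r+1) SVD rather than recomputed from scratch. The sketch follows the structure of Brand-style updates but is only an assumption-laden illustration; it handles the generic case where the update adds new directions to both column and row subspaces (ra, rb > 0):

```python
import numpy as np

def svd_rank1_update(U, s, Vt, a, b):
    """Given the thin SVD A = U @ np.diag(s) @ Vt, return the thin SVD of
    A + np.outer(a, b) via one small (r+1) x (r+1) SVD."""
    V = Vt.T
    m = U.T @ a; p = a - U @ m; ra = np.linalg.norm(p)
    w = Vt @ b;  q = b - V @ w; rb = np.linalg.norm(q)
    P, Q = p / ra, q / rb                        # generic case: ra, rb > 0
    r = len(s)
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K += np.outer(np.append(m, ra), np.append(w, rb))
    Uk, sk, Vkt = np.linalg.svd(K)               # small SVD, cheap for small r
    return np.column_stack([U, P]) @ Uk, sk, Vkt @ np.column_stack([V, Q]).T

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 8))   # rank 3
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, Vt = U[:, :3], s[:3], Vt[:3]               # thin SVD of the rank-3 A
a, b = rng.standard_normal(10), rng.standard_normal(8)
U2, s2, Vt2 = svd_rank1_update(U, s, Vt, a, b)   # SVD of A + outer(a, b)
```

The expensive step is the small SVD of K; everything else is matrix-vector work, which is what makes such updates attractive for streaming data.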
Fast Low-Rank Shared Dictionary Learning for Image Classification.
Vu, Tiep Huu; Monga, Vishal
2017-11-01
Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort, reducing it to roughly 3% of that of the full KF. © 2012 American Society of Civil Engineers.
Low-rank and sparse modeling for visual analysis
Fu, Yun
2014-01-01
This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applications.
Wen, Zaidao; Hou, Biao; Jiao, Licheng
2016-06-30
Discriminative dictionary learning (DDL) framework has been widely used in image classification, aiming to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading the classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with a two-level low rank and group sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low rank and a group sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix is further decomposed into a low rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model can achieve competitive or better performance in terms of classification accuracy.
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Combinatorial conditions for low rank solutions in semidefinite programming
Varvitsiotis, A.
2013-01-01
In this thesis we investigate combinatorial conditions that guarantee the existence of low-rank optimal solutions to semidefinite programs. Results of this type are important for approximation algorithms and for the study of geometric representations of graphs.
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary that captures the discriminative features of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary, which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the...
Accurate variational electronic structure calculations with the density matrix renormalization group
Wouters, Sebastian
2014-01-01
During the past 15 years, the density matrix renormalization group (DMRG) has become increasingly important for ab initio quantum chemistry. The underlying matrix product state (MPS) ansatz is a low-rank decomposition of the full configuration interaction tensor. The virtual dimension of the MPS controls the size of the corner of the many-body Hilbert space that can be reached. Whereas the MPS ansatz will only yield an efficient description for noncritical one-dimensional systems, it can still be used as a variational ansatz for other finite-size systems. Rather large virtual dimensions are then required. The two most important aspects to reduce the corresponding computational cost are a proper choice and ordering of the active space orbitals, and the exploitation of the symmetry group of the Hamiltonian. By taking care of both aspects, DMRG becomes an efficient replacement for exact diagonalization in quantum chemistry. DMRG and Hartree-Fock theory have an analogous structure. The former can be interpreted a...
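The MPS ansatz described above is literally a sequence of low-rank factorizations: a state vector over L sites can be split into a train of site tensors by repeated SVDs, with the virtual (bond) dimension given by the ranks. A minimal sketch of this exact decomposition (no truncation; the 4-site, spin-1/2 example is an illustrative assumption):

```python
import numpy as np

def to_mps(psi, d, L):
    """Split a length-d**L vector into a train of 3-index tensors by
    sequential SVDs (left-canonical form)."""
    tensors, rest = [], psi.reshape(1, -1)
    for _ in range(L - 1):
        chi = rest.shape[0]
        U, s, Vt = np.linalg.svd(rest.reshape(chi * d, -1), full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))      # site tensor (left, phys, right)
        rest = s[:, None] * Vt                     # carry the remainder rightward
    tensors.append(rest.reshape(rest.shape[0], d, 1))
    return tensors

rng = np.random.default_rng(4)
psi = rng.standard_normal(2 ** 4)                  # 4 spin-1/2 sites
mps = to_mps(psi, d=2, L=4)

# contract the train back together to verify the decomposition is exact
rec = mps[0]
for T in mps[1:]:
    rec = np.tensordot(rec, T, axes=([rec.ndim - 1], [0]))
rec = rec.reshape(-1)
```

Capping the number of singular values kept at each split is exactly the low-rank truncation that controls the virtual dimension, and hence the cost, of a DMRG calculation.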
Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation
Lin, Zhouchen; Su, Zhixun
2011-01-01
Low-rank representation (LRR) is an effective method for subspace clustering and has found wide applications in computer vision and machine learning. The existing LRR solver is based on the alternating direction method (ADM). It suffers from $O(n^3)$ computation complexity due to the matrix-matrix multiplications and matrix inversions, even if partial SVD is used. Moreover, introducing auxiliary variables also slows down the convergence. Such a heavy computation load prevents LRR from large scale applications. In this paper, we generalize ADM by linearizing the quadratic penalty term and allowing the penalty to change adaptively. We also propose a novel rule to update the penalty such that the convergence is fast. With our linearized ADM with adaptive penalty (LADMAP) method, it is unnecessary to introduce auxiliary variables and invert matrices. The matrix-matrix multiplications are further alleviated by using the skinny SVD representation technique. As a result, we arrive at an algorithm for LRR with comple...
Low rank updated LS-SVM classifiers for fast variable selection.
Ojeda, Fabian; Suykens, Johan A K; De Moor, Bart
2008-01-01
Least squares support vector machine (LS-SVM) classifiers are a class of kernel methods whose solution follows from a set of linear equations. In this work we present low rank modifications to the LS-SVM classifiers that are useful for fast and efficient variable selection. The inclusion or removal of a candidate variable can be represented as a low rank modification to the kernel matrix (linear kernel) of the LS-SVM classifier. In this way, the LS-SVM solution can be updated rather than being recomputed, which improves the efficiency of the overall variable selection process. Relevant variables are selected according to a closed form of the leave-one-out (LOO) error estimator, which is obtained as a by-product of the low rank modifications. The proposed approach is applied to several benchmark data sets as well as two microarray data sets. When compared to other related algorithms used for variable selection, simulations applying our approach clearly show a lower computational complexity together with good stability on the generalization error.
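The rank-one flavor of such a kernel update can be sketched directly: with a linear kernel, including one candidate variable x changes K to K + x x^T, and the regularized inverse is updated by the Sherman-Morrison formula instead of being recomputed. This is a generic sketch of the idea, not the authors' exact LOO machinery; sizes and the regularization constant are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 5
X = rng.standard_normal((n, d))
lam = 0.1

# regularized linear-kernel matrix using the first d-1 variables
K = X[:, :d - 1] @ X[:, :d - 1].T + lam * np.eye(n)
Kinv = np.linalg.inv(K)

# include candidate variable d: K grows by the rank-1 term x x^T
x = X[:, d - 1]
u = Kinv @ x
Kinv_new = Kinv - np.outer(u, u) / (1.0 + x @ u)   # Sherman-Morrison, O(n^2)
```

Each candidate variable is thus scored at O(n^2) cost instead of the O(n^3) of refactoring, which is what makes wrapper-style variable selection with LOO scoring affordable.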
Hyperspectral Anomaly Detection Based on Low-Rank Representation and Learned Dictionary
Directory of Open Access Journals (Sweden)
Yubin Niu
2016-03-01
In this paper, a novel hyperspectral anomaly detector based on low-rank representation (LRR) and a learned dictionary (LD) is proposed. This method assumes that a two-dimensional matrix transformed from a three-dimensional hyperspectral image can be decomposed into two parts: a low rank matrix representing the background and a sparse matrix standing for the anomalies. The direct application of the LRR model is sensitive to a tradeoff parameter that balances the two parts. To mitigate this problem, a learned dictionary is introduced into the decomposition process. The dictionary is learned from the whole image with a random selection process and can therefore be viewed as the spectra of the background only. Using the learned dictionary also reduces the computational cost. The statistical characteristics of the sparse matrix allow the application of a basic anomaly detection method to obtain detection results. Experimental results demonstrate that, compared to other anomaly detection methods, the proposed method based on LRR and LD is robust and achieves satisfactory anomaly detection results.
Enzymatic depolymerization of low-rank coal (lignite)
Energy Technology Data Exchange (ETDEWEB)
Hofrichter, M.; Ziegenhagen, D.; Sorge, S.; Bublitz, F.; Fritsche, W. [Jena Univ. (Germany). Inst. fuer Mikrobiologie
1997-12-31
Ligninolytic basidiomycetes (wood and litter decaying fungi) have the ability to degrade low-rank coal (lignite). Extracellular manganese peroxidase (MnP) is the decisive enzyme in the depolymerization process both of coal derived humic substances and native coal. The depolymerization of coal occurred via Mn3+ ions acting as primary mediator and can be considerably enhanced by certain thiols acting as secondary mediators. The depolymerization process leads finally to complex mixtures of fulvic acid-like compounds. (orig.)
Carbon monoxide adsorptive capability of low rank coal's maceral
Institute of Scientific and Technical Information of China (English)
WANG Yue-hong; GUO Li-wen; ZHANG Jiu-ling
2008-01-01
Centrifugal separation experiments were performed to obtain pure macerals (inertinite and vitrinite), and isothermal adsorption tests on each pure maceral were carried out at 30, 40, 50, 55, 60, and 65 °C after proximate, elemental, and maceral analyses of the coal samples, with the aim of studying the CO adsorptive capability of each maceral of low rank coal at different temperatures and pressures. The results show that at low temperature (T ≤ 50 °C) the CO adsorption isotherm is of Type I and can be described by the Langmuir equation, and that the effect of temperature on adsorption is greater than that of pressure in the low-temperature, low-pressure region. At high temperature (T > 50 °C), the relationship between the adsorbed quantity of CO and pressure is linear, increases with pressure, and can be described by the Henry equation (Q = KP). Both temperature and pressure strongly influence the CO adsorptive capability of low rank coals; the temperature effect is especially complex, and its mechanism requires further study. In addition, volatile matter, inertinite, oxygen-containing functional groups, and negative functional groups are abundant in the low rank coal samples; in particular, the hydroxyl (-OH) content strongly influences CO adsorption, and inertinite has a stronger effect than vitrinite on the adsorptive capability of low rank coal samples, consistent with research on CH4 adsorption.
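The two regimes reported above correspond to standard isotherm models, which can be written down directly (parameter values in the docstrings are illustrative, not fitted to the paper's data):

```python
def langmuir(p, a, b):
    """Type I isotherm Q = a*b*p / (1 + b*p); a is the saturation capacity."""
    return a * b * p / (1.0 + b * p)

def henry(p, k):
    """Linear regime Q = K*p, reported here for the high-temperature tests."""
    return k * p
```

At low pressure the Langmuir form reduces to a Henry law with K = a*b, while at high pressure it saturates at the capacity a; the linear Henry fit observed above 50 °C corresponds to adsorption staying far from saturation.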
Linear low-rank approximation and nonlinear dimensionality reduction
Institute of Scientific and Technical Information of China (English)
ZHANG Zhenyue; ZHA Hongyuan
2004-01-01
We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.
Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
Zhang, Tianzhu
2014-06-19
Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.
Robust subspace estimation using low-rank optimization theory and applications
Oreifej, Omar
2014-01-01
Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. An increasing interest has been recently placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multipliers. In this book, the authors discuss fundamental...
CO2 SEQUESTRATION POTENTIAL OF TEXAS LOW-RANK COALS
Energy Technology Data Exchange (ETDEWEB)
Duane A. McVay; Walter B. Ayers Jr; Jerry L. Jensen
2004-11-01
The objectives of this project are to evaluate the feasibility of carbon dioxide (CO2) sequestration in Texas low-rank coals and to determine the potential for enhanced coalbed methane (CBM) recovery as an added benefit of sequestration. There were two main objectives for this reporting period. First, they wanted to collect Wilcox coal samples from depths similar to those of probable sequestration sites, with the objective of determining accurate parameters for reservoir model description and for reservoir simulation. The second objective was to pursue opportunities for determining the permeability of deep Wilcox coal to use as additional, necessary data for modeling reservoir performance during CO2 sequestration and enhanced coalbed methane recovery. In mid-summer, Anadarko Petroleum Corporation agreed to allow the authors to collect Wilcox Group coal samples from a well that was to be drilled to the Austin Chalk, which is several thousand feet below the Wilcox. In addition, they agreed to allow them to perform permeability tests in coal beds in an existing shut-in well. Both wells are in the region of the Sam K. Seymour power station, a site that they earlier identified as a major point source of CO2. They negotiated contracts for sidewall core collection and core analyses, and they began discussions with a service company to perform permeability testing. To collect sidewall core samples of the Wilcox coals, they made structure and isopach maps and cross sections to select coal beds and to determine their depths for coring. On September 29, 10 sidewall core samples were obtained from 3 coal beds of the Lower Calvert Bluff Formation of the Wilcox Group. The samples were desorbed in 4 sidewall core canisters. Desorbed gas samples were sent to a laboratory for gas compositional analyses, and the coal samples were sent to another laboratory to measure CO2, CH4, and N2 sorption isotherms. All analyses should be finished by the end of...
Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable
Energy Technology Data Exchange (ETDEWEB)
Menkov, V. [Indiana Univ., Bloomington, IN (United States)
1996-12-31
An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice that cost. When implemented on a parallel machine, the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
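The "non-singular diagonal plus low-rank" structure described above admits a direct solve via the Woodbury identity. A minimal dense sketch follows; for illustration it assumes the low-rank blocks aggregate into one global factorization Q = U V^T with a scalar diagonal D, which is a simplification of the paper's block setting, not its exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 5

# Non-singular diagonal part D (stand-in for the block-diagonal D) and a
# low-rank part Q = U @ V.T (stand-in for the low-rank blocks of Q).
d = rng.uniform(1.0, 2.0, n)
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
B = np.diag(d) + U @ V.T
y = rng.standard_normal(n)

# Woodbury identity:
#   (D + U V^T)^{-1} y = D^{-1} y - D^{-1} U (I + V^T D^{-1} U)^{-1} V^T D^{-1} y
# Only an r x r dense solve is needed; the applications of D^{-1} and the
# products with U and V are the cheap, parallel-friendly operations.
Dinv_y = y / d
Dinv_U = U / d[:, None]
core = np.eye(r) + V.T @ Dinv_U
x = Dinv_y - Dinv_U @ np.linalg.solve(core, V.T @ Dinv_y)
```

The cost is dominated by a few matrix-vector products with U and V, matching the abstract's operation-count claim in spirit.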
Energy Technology Data Exchange (ETDEWEB)
Akkaya, Ali Volkan [Department of Mechanical Engineering, Yildiz Technical University, 34349 Besiktas, Istanbul (Turkey)
2009-02-15
In this paper, multiple nonlinear regression models for estimating the higher heating value (HHV) of coals are developed using proximate analysis data obtained largely from low-rank coal samples on an as-received basis. In this modeling study, three main model structures, depending on the number of proximate analysis parameters used as independent variables (moisture, ash, volatile matter, and fixed carbon), are first categorized. Second, sub-model structures with different arrangements of the independent variables are considered. Each sub-model structure is analyzed with a number of model equations in order to find the best-fitting model using the multiple nonlinear regression method. Based on the results of the nonlinear regression analysis, the best model for each sub-structure is determined. Among them, the models giving the highest correlation for the three main structures are selected. Although all three selected models predict HHV rather accurately, the model involving four independent variables provides the most accurate estimation of HHV. Additionally, when the chosen four-variable model and a model from the literature are tested with additional proximate analysis data, the model developed in this study gives more accurate predictions of the HHV of coals. It can be concluded that the developed model is an effective tool for HHV estimation of low-rank coals. (author)
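The fitting procedure can be sketched with `scipy.optimize.curve_fit` on synthetic proximate-analysis data. The sub-model form, units, and coefficients below are hypothetical illustrations of a nonlinear-in-the-variables HHV correlation, not the paper's fitted equations:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical nonlinear sub-model in three proximate-analysis variables:
#   HHV = a + b*FC + c*VM + d*VM**2 + e*M
# (FC = fixed carbon, VM = volatile matter, M = moisture, all in wt%,
#  as-received; HHV in MJ/kg). Form and coefficients are illustrative only.
def hhv_model(X, a, b, c, d, e):
    FC, VM, M = X
    return a + b * FC + c * VM + d * VM**2 + e * M

# Synthetic "measurements" standing in for a coal-sample database.
rng = np.random.default_rng(1)
FC = rng.uniform(20, 50, 80)
VM = rng.uniform(25, 45, 80)
M = rng.uniform(10, 35, 80)
true_coeffs = (2.0, 0.35, 0.20, -0.001, -0.12)
HHV = hhv_model((FC, VM, M), *true_coeffs) + rng.normal(0, 0.05, 80)

# Least-squares fit of the sub-model to the data.
popt, pcov = curve_fit(hhv_model, (FC, VM, M), HHV)
```

In practice one would fit each candidate sub-model this way and rank them by correlation coefficient, as the abstract describes.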
The characterization of organomineral components of low-rank coals
Energy Technology Data Exchange (ETDEWEB)
Martinez-Tarazona, M.R.; Palacios, J.M.; Martinez-Alonso, A.; Tascon, J.M.D. (Instituto Nacional del Carbon y sus Derivados, Oviedo (Spain))
1990-04-01
A methodology for characterizing organomineral components of the As Pontes and Meirama Spanish brown coals is developed. Analytical electron microscopy provided indirect evidence for the association of alkaline-earth elements with the organic matter of coal. Fourier transform infrared spectroscopy yielded information on the nature of bonding of inorganic elements to carboxyl groups. Quantitative results were obtained by methods involving extraction or oxidative leaching followed by analysis of cations. The resulting set of procedures leads to a comprehensive characterization of this type of coal components, which deserve interest due to their role in the combustion and gasification of low-rank coals. 18 refs., 4 figs., 1 tab.
Low-Rank Coal Grinding Performance Versus Power Plant Performance
Energy Technology Data Exchange (ETDEWEB)
Rajive Ganguli; Sukumar Bandopadhyay
2008-12-31
The intent of this project was to demonstrate that Alaskan low-rank coal, which is high in volatile content, need not be ground as fine as bituminous coal (typically low in volatile content) for optimum combustion in power plants. The grind or particle size distribution (PSD), which is quantified by the percentage of pulverized coal passing 74 microns (200 mesh), affects the pulverizer throughput in power plants. The finer the grind, the lower the throughput. For a power plant to maintain combustion levels, throughput needs to be high. The problem of particle size is compounded for Alaskan coal since it has a low Hardgrove grindability index (HGI); that is, it is difficult to grind. If the thesis of this project is demonstrated, then Alaskan coal need not be ground to the industry standard, thereby alleviating somewhat the low HGI issue (and, hopefully, furthering the salability of Alaskan coal). This project studied the relationship between PSD and power plant efficiency, emissions, and mill power consumption for low-rank high-volatile-content Alaskan coal. The emissions studied were CO, CO{sub 2}, NO{sub x}, SO{sub 2}, and Hg (only two tests). The tested PSD range was 42 to 81 percent passing 74 microns. Within the tested range, there was very little correlation between PSD and power plant efficiency, CO, NO{sub x}, and SO{sub 2}. Hg emissions were very low and, therefore, did not allow comparison between grind sizes. Mill power consumption was lower for coarser grinds.
Enhanced low-rank + sparsity decomposition for speckle reduction in optical coherence tomography
Kopriva, Ivica; Shi, Fei; Chen, Xinjian
2016-07-01
Speckle artifacts can strongly hamper quantitative analysis of optical coherence tomography (OCT) images, which is necessary to provide assessment of ocular disorders associated with vision loss. Here, we introduce a method for speckle reduction, which leverages low-rank + sparsity decomposition (LRpSD) of the logarithm of intensity OCT images. In particular, we combine nonconvex regularization-based low-rank approximation of an original OCT image with a sparsity term that incorporates the speckle. State-of-the-art methods for LRpSD require a priori knowledge of the rank and approximate it with the nuclear norm, which is not an accurate rank indicator. In contrast, the proposed method provides a more accurate approximation of the rank through the use of nonconvex regularization that induces a sparse approximation of the singular values. Furthermore, the rank value is not required to be known a priori. This, in turn, yields an automatic and computationally more efficient method for speckle reduction, which produces OCT images with improved contrast-to-noise ratio, contrast, and edge fidelity. The source code will be available at www.mipav.net/English/research/research.html.
Large-scale 3-D EM modelling with a Block Low-Rank multifrontal direct solver
Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo
2017-06-01
We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3-D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of the low-rank representations was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3-D resistivity models representing typical scenarios for marine controlled-source EM surveys, in particular the SEG SEAM model, which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10, 30 and 40 per cent of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads, at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N{sup 2}) for the full-rank solver to O(N{sup m}) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive in comparison to iterative solvers as an engine for 3-D controlled-source electromagnetic Gauss-Newton inversion that requires forward modelling for a few thousand right-hand sides.
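The BLR compression idea can be shown in miniature on a small dense matrix: partition it into blocks and replace each off-diagonal block by a truncated SVD kept only to a prescribed accuracy (the "BLR threshold"). The kernel, block size, and threshold below are illustrative choices, not the solver's actual frontal matrices:

```python
import numpy as np

n, b, eps = 256, 32, 1e-6
pts = np.sort(np.random.default_rng(5).random(n))
# Smooth kernel matrix: off-diagonal blocks of such matrices compress well.
A = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))

entries_full, entries_blr = 0, 0
A_blr = A.copy()
for i in range(0, n, b):
    for j in range(0, n, b):
        blk = A[i:i + b, j:j + b]
        entries_full += blk.size
        if i == j:
            entries_blr += blk.size          # diagonal blocks stay full
            continue
        # Truncate the block's SVD at the relative BLR threshold eps.
        U, s, Vt = np.linalg.svd(blk)
        r = int((s > eps * s[0]).sum())
        A_blr[i:i + b, j:j + b] = (U[:, :r] * s[:r]) @ Vt[:r]
        entries_blr += r * 2 * b             # store the two thin factors
```

Loosening `eps` trades accuracy for storage and flops, which is exactly the balance the abstract describes optimizing.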
CO2 Sequestration Potential of Texas Low-Rank Coals
Energy Technology Data Exchange (ETDEWEB)
Duane McVay; Walter Ayers, Jr.; Jerry Jensen; Jorge Garduno; Gonzola Hernandez; Rasheed Bello; Rahila Ramazanova
2006-08-31
Injection of CO{sub 2} in coalbeds is a plausible method of reducing atmospheric emissions of CO{sub 2}, and it can have the additional benefit of enhancing methane recovery from coal. Most previous studies have evaluated the merits of CO{sub 2} disposal in high-rank coals. The objective of this research was to determine the technical and economic feasibility of CO{sub 2} sequestration in, and enhanced coalbed methane (ECBM) recovery from, low-rank coals in the Texas Gulf Coast area. Our research included an extensive coal characterization program, including acquisition and analysis of coal core samples and well transient test data. We conducted deterministic and probabilistic reservoir simulation and economic studies to evaluate the effects of injectant fluid composition (pure CO{sub 2} and flue gas), well spacing, injection rate, and dewatering on CO{sub 2} sequestration and ECBM recovery in low-rank coals of the Calvert Bluff formation of the Texas Wilcox Group. Shallow and deep Calvert Bluff coals occur in two distinct coalbed gas petroleum systems that are separated by a transition zone. Calvert Bluff coals < 3,500 ft deep are part of a biogenic coalbed gas system. They have low gas content and are part of a freshwater aquifer. In contrast, Wilcox coals deeper than 3,500 ft are part of a thermogenic coalbed gas system. They have high gas content and are part of a saline aquifer. CO{sub 2} sequestration and ECBM projects in Calvert Bluff low-rank coals of East-Central Texas must be located in the deeper, unmineable coals, because shallow Wilcox coals are part of a protected freshwater aquifer. Probabilistic simulation of 100% CO{sub 2} injection into 20 feet of Calvert Bluff coal in an 80-acre 5-spot pattern indicates that these coals can store 1.27 to 2.25 Bcf of CO{sub 2} at depths of 6,200 ft, with an ECBM recovery of 0.48 to 0.85 Bcf. Simulation results of flue gas injection (87% N{sub 2}-13% CO{sub 2}) indicate that these same coals can store 0.34 to 0
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract the texture local histogram features (LHOG) at each pixel, which can efficiently capture the complex and micro-texture pattern. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Low-rank coal research: Volume 3, Combustion research: Final report. [Great Plains
Energy Technology Data Exchange (ETDEWEB)
Mann, M. D.; Hajicek, D. R.; Zobeck, B. J.; Kalmanovitch, D. P.; Potas, T. A.; Maas, D. J.; Malterer, T. J.; DeWall, R. A.; Miller, B. G.; Johnson, M. D.
1987-04-01
Volume III, Combustion Research, contains articles on fluidized bed combustion, advanced processes for low-rank coal slurry production, low-rank coal slurry combustion, heat engine utilization of low-rank coals, and Great Plains Gasification Plant. These articles have been entered individually into EDB and ERA. (LTN)
DEVELOPMENT OF CARBON PRODUCTS FROM LOW-RANK COALS
Energy Technology Data Exchange (ETDEWEB)
Edwin S. Olson
2001-07-01
The goal of this project is to facilitate the production of carbon fibers from low-rank coal (LRC) tars. To this end, the effect of demineralization on the tar yields and composition was investigated using high-sodium and high-calcium lignites commonly mined in North Dakota. These coals were demineralized by ion exchange with ammonium acetate and by cation dissolution with nitric acid. Two types of thermal processing were investigated for obtaining suitable precursors for pitch and fiber production. Initially, tars were produced by simple pyrolysis of the set of samples at 650 C. Since these experiments produced little usable material from any of the samples, the coals were heated at moderate temperatures (380 and 400 C) in tetralin solvent to form and extract the plastic material (metaplast) that forms at these temperatures.
Direct liquefaction of low-rank coals under mild conditions
Energy Technology Data Exchange (ETDEWEB)
Braun, N.; Rinaldi, R. [Max-Planck-Institut fuer Kohlenforschung, Muelheim an der Ruhr (Germany)
2013-11-01
Due to decreasing petroleum reserves, direct coal liquefaction is attracting renewed interest as an alternative process to produce liquid fuels. The combination of hydrogen peroxide and coal is not a new one. In the early 1980s, Vasilakos and Clinton described a procedure for desulfurization by leaching coal with solutions of sulphuric acid/H{sub 2}O{sub 2}. But so far, H{sub 2}O{sub 2} has never been ascribed a major role in coal liquefaction. Herein, we describe a novel approach for liquefying low-rank coals using a solution of H{sub 2}O{sub 2} in the presence of a soluble non-transition metal catalyst. (orig.)
Compressive sensing via nonlocal low-rank regularization.
Dong, Weisheng; Shi, Guangming; Li, Xin; Ma, Yi; Huang, Feng
2014-08-01
Sparsity has been widely exploited for exact reconstruction of a signal from a small number of random measurements. Recent advances have suggested that structured or group sparsity often leads to more powerful signal reconstruction techniques in various compressed sensing (CS) studies. In this paper, we propose a nonlocal low-rank regularization (NLR) approach toward exploiting structured sparsity and explore its application to CS of both photographic and MRI images. We also propose the use of the nonconvex log det(X) as a smooth surrogate function for the rank instead of the convex nuclear norm, and justify the benefit of such a strategy using extensive experiments. To further improve the computational efficiency of the proposed algorithm, we have developed a fast implementation using the alternating direction method of multipliers (ADMM). Experimental results have shown that the proposed NLR-CS algorithm can significantly outperform existing state-of-the-art CS techniques for image recovery.
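The effect of the log det surrogate can be sketched as a single reweighted singular-value thresholding step, a common way to handle this nonconvex penalty: large singular values receive small weights and are barely shrunk, while small (noise) singular values are shrunk hard. The one-step scheme and the λ, ε values are illustrative assumptions, not the full NLR-CS algorithm:

```python
import numpy as np

def logdet_shrink(Y, lam=1.0, eps=1e-2):
    """One reweighted singular-value thresholding step for the log-det
    rank surrogate sum_i log(sigma_i + eps): weight w_i = lam/(sigma_i + eps)
    penalizes large singular values far less than the nuclear norm does."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = lam / (s + eps)                      # small weight for large sigma_i
    s_shrunk = np.maximum(s - w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
L = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))  # rank-4 truth
Y = L + 0.01 * rng.standard_normal((50, 50))                     # noisy patch matrix
X = logdet_shrink(Y, lam=0.05)
```

In the nonlocal setting, `Y` would be a matrix of grouped similar patches; the shrinkage recovers its low-rank structure while suppressing the noise singular values to exactly zero.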
Bayesian Framework with Non-local and Low-rank Constraint for Image Reconstruction
Tang, Zhonghe; Wang, Shengzhe; Huo, Jianliang; Guo, Hang; Zhao, Haibo; Mei, Yuan
2017-01-01
Built upon the methodology of 'grouping and collaborative filtering', the proposed algorithm recovers image patches from an array of similar noisy patches, based on the assumption that their noise-free versions (or approximations) lie in a low-dimensional subspace and have low rank. Based on an analysis of the effect of noise and perturbation on the singular values, a weighted nuclear norm is defined to replace the conventional nuclear norm, and the corresponding low-rank decomposition model and singular value shrinkage operator are derived. Taking into account the difference between the distributions of the signal and the noise, the weight depends not only on the standard deviation of the noise, but also on the rank of the noise-free matrix and the singular value itself. Experimental results on image reconstruction tasks show that, at relatively low computational cost, the performance of the proposed method is very close to that of the state-of-the-art reconstruction methods BM3D and LSSC, and even outperforms them in restoring and preserving structure.
Learning Better Word Embedding by Asymmetric Low-Rank Projection of Knowledge Graph
Institute of Scientific and Technical Information of China (English)
Fei Tian; Bin Gao; En-Hong Chen; Tie-Yan Liu
2016-01-01
Word embedding, which refers to low-dimensional dense vector representations of natural words, has demonstrated its power in many natural language processing tasks. However, it may suffer from the inaccurate and incomplete information contained in the free text corpus as training data. To tackle this challenge, there have been quite a few studies that leverage knowledge graphs as an additional information source to improve the quality of word embedding. Although these studies have achieved certain success, they have neglected some important facts about knowledge graphs: 1) many relationships in knowledge graphs are many-to-one, one-to-many or even many-to-many, rather than simply one-to-one; 2) most head entities and tail entities in knowledge graphs come from very different semantic spaces. To address these issues, in this paper, we propose a new algorithm named ProjectNet. ProjectNet models the relationships between head and tail entities after transforming them with different low-rank projection matrices. The low-rank projection can allow non one-to-one relationships between entities, while different projection matrices for head and tail entities allow them to originate in different semantic spaces. The experimental results demonstrate that ProjectNet yields more accurate word embedding than previous studies, and thus leads to clear improvements in various natural language processing tasks.
Modeling of pseudoacoustic P-waves in orthorhombic media with a low-rank approximation
Song, Xiaolei
2013-06-04
Wavefield extrapolation in pseudoacoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We use the dispersion relation for scalar wave propagation in pseudoacoustic orthorhombic media to model acoustic wavefields. The wavenumber-domain application of the Laplacian operator allows us to propagate the P-waves exclusively, without imposing any conditions on the parameter range for stability. It also allows us to avoid the dispersion artifacts commonly associated with evaluating the Laplacian operator in the space domain using practical finite-difference stencils. To handle the corresponding space-wavenumber mixed-domain operator, we apply the low-rank approximation approach. Considering the number of parameters necessary to describe orthorhombic anisotropy, the low-rank approach yields a space-wavenumber decomposition of the extrapolation operator that is dependent on space location regardless of the parameters, a feature necessary for orthorhombic anisotropy. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Furthermore, there is no coupling of qSV and qP waves, because we use the analytical dispersion solution corresponding to the P-wave.
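The core trick, approximating a mixed space-wavenumber operator by a short sum of separated terms so that extrapolation costs a handful of FFTs instead of a dense matrix product, can be illustrated in 1-D. The velocity model and the SVD-based factorization below are a toy stand-in for the paper's sampling-based low-rank decomposition:

```python
import numpy as np

# 1-D analogue of a mixed-domain phase-shift extrapolator
#   W(x, k) = exp(i * v(x) * |k| * dt),
# which cannot be applied by a single FFT because it mixes x and k.
nx = 128
x = np.linspace(0, 1, nx)
k = 2 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])
v = 1.5 + 0.5 * np.sin(2 * np.pi * x)        # smoothly varying velocity
dt = 1e-3

W = np.exp(1j * np.outer(v, np.abs(k)) * dt)  # full mixed-domain operator

# Truncated SVD: for smooth v(x), a small rank captures W to high accuracy,
# i.e. W(x, k) ~ sum_r a_r(x) b_r(k), the separated form the method exploits.
U, s, Vt = np.linalg.svd(W)
r = int((s > s[0] * 1e-8).sum())             # numerical rank at 1e-8
W_r = (U[:, :r] * s[:r]) @ Vt[:r]
```

Each of the `r` separated terms can be applied as a diagonal scaling in x, an FFT, and a diagonal scaling in k, so the extrapolation cost scales with `r` rather than with the grid size squared.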
An Accurate Approach to Large-Scale IP Traffic Matrix Estimation
Jiang, Dingde; Hu, Guangmin
This letter proposes a novel method of large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on partial flow measurements and the Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of the TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TMs and track their dynamics.
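The algebraic reconstruction technique at the heart of ARTI is the Kaczmarz row-projection iteration. A toy sketch on a consistent, well-determined system follows; real TM estimation is underdetermined and leans on the Fratar-model prior, which is omitted here:

```python
import numpy as np

def art(A, y, n_sweeps=500, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): cyclically project
    the iterate onto the hyperplane of each measurement A[i] @ x = y[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (y[i] - a @ x) / (a @ a) * a
    return x

# Toy stand-in for link-load measurements y = A x of origin-destination flows x.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))   # "routing"-like measurement matrix
x_true = rng.random(20)
y = A @ x_true
x_hat = art(A, y)
```

Each update touches only one row of `A`, which is why ART stays cheap even when the routing matrix is very large.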
Moving Bed Gasification of Low Rank Alaska Coal
Directory of Open Access Journals (Sweden)
Mandar Kulkarni
2012-01-01
This paper presents a process simulation of a moving bed gasifier using low rank, subbituminous Usibelli coal from Alaska. All the processes occurring in a moving bed gasifier (drying, devolatilization, gasification, and combustion) are included in this model. The model, developed in Aspen Plus, is used to predict the effect of various operating parameters, including pressure and the oxygen-to-coal and steam-to-coal ratios, on the product gas composition. The results obtained from the simulation were compared with experimental data in the literature. The predicted composition of the product gas was in general agreement with the established results. Carbon conversion increased with increasing oxygen-coal ratio and decreased with increasing steam-coal ratio. The steam-to-coal and oxygen-to-coal ratios affected the produced syngas composition, while pressure did not have a large impact on the product syngas composition. A nonslagging moving bed gasifier would have to be limited to an oxygen-coal ratio of 0.26 to operate below the ash softening temperature. Slagging moving bed gasifiers, not limited by operating temperature, could achieve a carbon conversion efficiency of 99.5% at an oxygen-coal ratio of 0.33. The model is useful for predicting the performance of Usibelli coal in a moving bed gasifier under different operating parameters.
Low rank extremal PPT states and unextendible product bases
Leinaas, Jon Magne; Sollid, Per Øyvind
2010-01-01
It is known how to construct, in a bipartite quantum system, a unique low rank entangled mixed state with positive partial transpose (a PPT state) from an unextendible product basis (a UPB), defined as an unextendible set of orthogonal product vectors. We point out that a state constructed in this way belongs to a continuous family of entangled PPT states of the same rank, all related by non-singular product transformations, unitary or non-unitary. The characteristic property of a state $\\rho$ in such a family is that its kernel $\\Ker\\rho$ has a generalized UPB, a basis of product vectors, not necessarily orthogonal, with no product vector in $\\Im\\rho$, the orthogonal complement of $\\Ker\\rho$. The generalized UPB in $\\Ker\\rho$ has the special property that it can be transformed to orthogonal form by a product transformation. In the case of a system of dimension $3\\times 3$, we give a complete parametrization of orthogonal UPBs. This is then a parametrization of families of rank 4 entangled (and extremal) PPT ...
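A concrete instance of the construction is the well-known 3×3 "Tiles" UPB from the earlier literature (not necessarily the parametrization used in this paper), whose complement carries a rank-4 PPT state. The claimed properties can be checked numerically:

```python
import numpy as np

# The 3x3 "Tiles" UPB: five mutually orthogonal product vectors that
# cannot be extended by any further orthogonal product vector.
e = np.eye(3)

def prod(a, b):
    """Normalized product vector a (x) b."""
    v = np.kron(a, b)
    return v / np.linalg.norm(v)

psis = [
    prod(e[0], e[0] - e[1]),
    prod(e[2], e[1] - e[2]),
    prod(e[0] - e[1], e[2]),
    prod(e[1] - e[2], e[0]),
    prod(e[0] + e[1] + e[2], e[0] + e[1] + e[2]),
]

# Normalized projector onto the 4-dimensional orthogonal complement:
# this is the rank-4 entangled PPT state the abstract refers to.
P = sum(np.outer(p, p) for p in psis)
rho = (np.eye(9) - P) / 4

# Partial transpose over the second subsystem.
rho_pt = rho.reshape(3, 3, 3, 3).transpose(0, 3, 2, 1).reshape(9, 9)
```

Orthogonality of the five vectors, rank 4 of the complement state, and positivity of the partial transpose can all be verified directly from these arrays.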
Moessbauer spectroscopic investigation of low rank coal lithotypes
Energy Technology Data Exchange (ETDEWEB)
Kostova, I.; Markova, K.; Kuntchev, K. [Bulgarian Academy of Sciences, Sofia (Bulgaria). Inst. of Applied Mineralogy
1997-12-31
Low rank coal lithotypes - xylain, humovitrain, semifusain, fusain and liptain sampled from the Maritsa Iztok coal basin (Bulgaria) have been examined by Moessbauer spectroscopy with no pre-concentration procedures. The results are used to identify three iron species in coal lithotypes and show that covalent iron (Fe{sup II}) related to pyrite, is the main iron species in xylain, while in humovitrain ferric iron is dominant. The total quantity of iron species in semifusain, fusain and liptain is about the same but their distribution is different. Ferric iron dominates in all the three lithotypes. Ferrous iron, although present in smaller quantities, has a higher content in fusain than in semifusain. The results illustrate the type of oxidation processes which formed the coal lithotypes. A transformation of Fe{sup 2+} to Fe{sup 3+} has occurred as a result of differing oxidation processes. The intensity of that transformation increases during the destructive microbial oxidation and decreases during thermal oxidation and direct oxidation processes. The opposite transformation of ferric to ferrous iron has been achieved during both thermal oxidation and direct oxidation processes. 9 refs., 2 figs., 2 tabs.
Missing Modality Transfer Learning via Latent Low-Rank Constraint.
Ding, Zhengming; Shao, Ming; Fu, Yun
2015-11-01
Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage.
Liu, Xiaoming; Yang, Zhou; Wang, Jia; Liu, Jun; Zhang, Kai; Hu, Wei
2017-01-01
Image denoising is a crucial step before performing segmentation or feature extraction on an image, and it affects the final result in image processing. In recent years, exploiting the self-similarity characteristics of images, many patch-based image denoising methods have been proposed, but most of them, termed internal denoising methods, use only the noisy image itself, so their performance is constrained by the limited information available. We propose a patch-based method, which uses a low-rank technique and a targeted database, to denoise the optical coherence tomography (OCT) image. When selecting the patches similar to a noisy patch, our method combines internal and external denoising by also utilizing other images relevant to the noisy image; our targeted database is made up of these two kinds of images and is an improvement over previous methods. Next, we leverage the low-rank technique to denoise the group matrix consisting of the noisy patch and the corresponding similar patches, exploiting the fact that a clean image can be seen as a low-rank matrix and that the rank of the noisy image is much larger than that of the clean image. After the first-step denoising is accomplished, we take advantage of the Gabor transform, which accounts for the layered structure of OCT retinal images, to construct a noisy image before the second step. Experimental results demonstrate that our method compares favorably with the existing state-of-the-art methods.
Slaughter, Chris; Bagwell, Justin; Checkles, Costa; Sentis, Luis; Vishwanath, Sriram
2011-01-01
Motivated by an emerging theory of robust low-rank matrix representation, in this paper, we introduce a novel solution for online rigid-body motion registration. The goal is to develop algorithmic techniques that enable a robust, real-time motion registration solution suitable for low-cost, portable 3-D camera devices. Assuming 3-D image features are tracked via a standard tracker, the algorithm first utilizes Robust PCA to initialize a low-rank shape representation of the rigid body. Robust PCA finds the global optimal solution of the initialization, while its complexity is comparable to singular value decomposition. In the online update stage, we propose a more efficient algorithm for sparse subspace projection to sequentially project new feature observations onto the shape subspace. The lightweight update stage guarantees the real-time performance of the solution while maintaining good registration even when the image sequence is contaminated by noise, gross data corruption, outlying features, and missing ...
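Robust PCA in the sense used here is commonly solved by principal component pursuit: split an observation matrix into a low-rank part plus a sparse corruption part. A compact ADMM sketch on synthetic data follows; the thresholds and parameters are the standard PCP recipe, not necessarily the authors' solver:

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=1000, tol=1e-6):
    """Principal component pursuit via ADMM: alternate singular-value
    thresholding (low-rank part L) and entrywise soft thresholding
    (sparse part S) subject to L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # standard PCP weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum()) # standard penalty choice
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    for _ in range(n_iter):
        # L-update: singular-value soft thresholding at level 1/mu.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding at level lam/mu.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) < tol * np.linalg.norm(M):
            break
    return L, S

# Synthetic test: rank-6 matrix plus 5% gross corruptions.
rng = np.random.default_rng(4)
L0 = rng.standard_normal((60, 6)) @ rng.standard_normal((6, 60))
S0 = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.05
S0[mask] = 10 * rng.standard_normal(mask.sum())
L_hat, S_hat = rpca(L0 + S0)
```

As the abstract notes, the convex formulation finds the globally optimal decomposition; the per-iteration cost is dominated by one SVD, comparable to a plain singular value decomposition.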
Shi, Ziqiang; Zheng, Tieran; Deng, Shiwen
2011-01-01
In this paper, a novel framework based on trace norm minimization for audio segments is proposed. In this framework, both the feature extraction and classification are obtained by solving corresponding convex optimization problems with trace norm regularization. For feature extraction, robust principal component analysis (robust PCA) via minimization of a combination of the nuclear norm and the $\\ell_1$-norm is used to extract low-rank features which are robust to white noise and gross corruption for audio segments. These low-rank features are fed to a linear classifier whose weight and bias are learned by solving similar trace norm constrained problems. For this classifier, most methods find the weight and bias in batch-mode learning, which makes them inefficient for large-scale problems. In this paper, we propose an online framework using the accelerated proximal gradient method. This framework has a main advantage in memory cost. In addition, as a result of the regularization formulation of matrix classificatio...
Cheng, Jiubing
2016-03-15
In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagations of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.
Low-rank coal research. Quarterly report, January--March 1990
Energy Technology Data Exchange (ETDEWEB)
1990-08-01
This document contains several quarterly progress reports for low-rank coal research that was performed from January-March 1990. Reports in Control Technology and Coal Preparation Research are in Flue Gas Cleanup, Waste Management, and Regional Energy Policy Program for the Northern Great Plains. Reports in Advanced Research and Technology Development are presented in Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Reports in Combustion Research cover Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Coal Fuels, Diesel Utilization of Low-Rank Coals, and Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications. Liquefaction Research is reported in Low-Rank Coal Direct Liquefaction. Gasification Research progress is discussed for Production of Hydrogen and By-Products from Coal and for Chemistry of Sulfur Removal in Mild Gas.
CO2 SEQUESTRATION POTENTIAL OF TEXAS LOW-RANK COALS
Energy Technology Data Exchange (ETDEWEB)
Duane A. McVay; Walter B. Ayers Jr.; Jerry L. Jensen
2003-10-01
The objectives of this project are to evaluate the feasibility of carbon dioxide (CO{sub 2}) sequestration in Texas low-rank coals and to determine the potential for enhanced coalbed methane (CBM) recovery as an added benefit of sequestration. The main objective for this reporting period was to further characterize the three areas selected as potential CO{sub 2} sequestration sites. Well-log data are critical for defining depth, thickness, number, and grouping of coal seams at the proposed sequestration sites. Thus, we purchased 12 hardcopy well logs (in addition to 15 well logs obtained during the previous quarter) from a commercial source and digitized them to make coal-occurrence maps and cross sections. Detailed correlation of coal zones is important for reservoir analysis and modeling. Thus, we correlated and mapped Wilcox Group subdivisions--the Hooper, Simsboro, and Calvert Bluff formations, as well as the coal-bearing intervals of the Yegua and Jackson formations--in well logs. To assess cleat properties and describe coal characteristics, we made field trips to the Big Brown and Martin Lake coal mines. This quarter we also received CO{sub 2} and methane sorption analyses of the Sandow Mine samples, and we are assessing the results. GEM, a compositional simulator developed by the Computer Modeling Group (CMG), was selected for performing the CO{sub 2} sequestration and enhanced CBM modeling tasks for this project. This software was used to conduct preliminary CO{sub 2} sequestration and methane production simulations in a 5-spot injection pattern. We are continuing to pursue a cooperative agreement with Anadarko Petroleum, which has already acquired significant relevant data near one of our potential sequestration sites.
CO2 Sequestration Potential of Texas Low-Rank Coals
Energy Technology Data Exchange (ETDEWEB)
Duane A. McVay; Walter B. Ayers Jr; Jerry L. Jensen
2003-07-01
The objective of this project is to evaluate the feasibility of carbon dioxide (CO{sub 2}) sequestration in Texas low-rank coals and to determine the potential for enhanced coalbed methane (CBM) recovery as an added benefit of sequestration. The main objectives for this reporting period were to further characterize the three areas selected as potential test sites, to begin assessing regional attributes of natural coal fractures (cleats), which control coalbed permeability, and to interview laboratories for coal sample testing. An additional objective was to initiate discussions with an operating company that has interests in Texas coalbed gas production and CO{sub 2} sequestration potential, to determine their interest in participation and cost sharing in this project. Well-log data are critical for defining depth, thickness, number, and grouping of coal seams at the proposed sequestration sites. Therefore, we purchased 15 well logs from a commercial source to make coal-occurrence maps and cross sections. Log suites included gamma ray (GR), self potential (SP), resistivity, sonic, and density curves. Other properties of the coals in the selected areas were collected from published literature. To assess cleat properties and describe coal characteristics, we made field trips to a Jackson coal outcrop and visited Wilcox coal exposures at the Sandow surface mine. Coal samples at the Sandow mine were collected for CO{sub 2} and methane sorption analyses. We contacted several laboratories that specialize in analyzing coals and selected a laboratory, submitting the Sandow Wilcox coals for analysis. To address the issue of cost sharing, we had fruitful initial discussions with a petroleum corporation in Houston. We reviewed the objectives and status of this project, discussed data that they have already collected, and explored the potential for cooperative data acquisition and exchange in the future. We are pursuing a cooperative agreement with them.
Fast and accurate generation method of PSF-based system matrix for PET reconstruction
Sun, Xiao-Li; Yun, Ming-Kai; Li, Dao-Wu; Gao, Juan; Li, Mo-Han; Chai, Pei; Tang, Hao-Hui; Zhang, Zhi-Ming; Wei, Long
2016-01-01
Positional single photon incidence response (P-SPIR) theory is developed in this paper to generate a more accurate PSF-contained system matrix simply and quickly. The method proved highly effective in improving spatial resolution when applied to the Eplus-260 primate PET designed by the Institute of High Energy Physics of the Chinese Academy of Sciences (IHEP). Simultaneously, to meet clinical needs, GPU acceleration is put to use. Basically, P-SPIR theory takes both the incidence angle and the incidence position into consideration through crystal subdivision, instead of only the incidence angle, based on the Geant4 Application for Emission Tomography (GATE). The simulation conforms to the actual response distribution and can be completed rapidly, in less than 1 s. Furthermore, two-block penetration and normalization of the response probability are introduced to better match reality. With the PSF obtained, the homogenization model is analyzed to calculate the spread distribution of bins within a few minutes for system matrix genera...
Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang
2017-07-01
It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization, and a weighted sequence. Firstly, due to the periodic repetition mechanism of the impulsive features, an adaptive partition window can be designed to transform the impulsive features into a data matrix. The highlight of the partition window is that it accumulates all local feature information and aligns it. Then, all columns of the data matrix share similar waveforms and a core physical phenomenon arises, i.e., the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence, with weights tuned adaptively in inverse proportion to singular value amplitude, is adopted to guarantee the distribution consistency of the large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfying stationary solution by alternately applying a proximal operator and least-squares fitting. Moreover, the sensitivity analysis and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which shows that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to the ...
CO2 Sequestration Potential of Texas Low-Rank Coals
Energy Technology Data Exchange (ETDEWEB)
Duane A. McVay; Walter B. Ayers Jr; Jerry L. Jensen
2005-10-01
The objectives of this project are to evaluate the feasibility of carbon dioxide (CO{sub 2}) sequestration in Texas low-rank coals and to determine the potential for enhanced coalbed methane (ECBM) recovery as an added benefit of sequestration. The main objectives for this reporting period were to perform reservoir simulation and economic sensitivity studies to (1) determine the effects of injection gas composition, (2) determine the effects of injection rate, and (3) determine the effects of coal dewatering prior to CO{sub 2} injection on CO{sub 2} sequestration in the Lower Calvert Bluff Formation (LCB) of the Wilcox Group coals in east-central Texas. To predict CO{sub 2} sequestration and ECBM in LCB coal beds for these three sensitivity studies, we constructed a 5-spot pattern reservoir simulation model and selected reservoir parameters representative of a typical depth (approximately 6,200 ft) of potential LCB coalbed reservoirs in the focus area of east-central Texas. Simulation results of flue gas injection (13% CO{sub 2} - 87% N{sub 2}) in an 80-acre 5-spot pattern (40-ac well spacing) indicate that LCB coals with average net thickness of 20 ft can store a median value of 0.46 Bcf of CO{sub 2} at depths of 6,200 ft, with a median ECBM recovery of 0.94 Bcf and median CO{sub 2} breakthrough time of 4,270 days (11.7 years). Simulation of 100% CO{sub 2} injection in an 80-acre 5-spot pattern indicated that these same coals with average net thickness of 20 ft can store a median value of 1.75 Bcf of CO{sub 2} at depths of 6,200 ft, with a median ECBM recovery of 0.67 Bcf and median CO{sub 2} breakthrough time of 1,650 days (4.5 years). Breakthrough was defined as the point when CO{sub 2} comprised 5% of the production stream for all cases. The injection rate sensitivity study for pure CO{sub 2} injection in an 80-acre 5-spot pattern at 6,200-ft depth shows that total volumes of CO{sub 2} sequestered and methane produced do not have significant sensitivity to ...
Low Rank Alternating Direction Method of Multipliers Reconstruction for MR Fingerprinting
Assländer, Jakob; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2016-01-01
Purpose: The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for Magnetic Resonance Fingerprinting (MRF). Methods: Based on a singular value decomposition (SVD) of the signal evolution, MRF is formulated as a low rank inverse problem in which one image is reconstructed for each singular value under consideration. This low rank approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. The low rank approximation also improves the conditioning of the problem, which is further improved by extending the low rank inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers (ADMM). The root mean square error and the noise propagation are analyzed in simulations. For verification, an in vivo example is provided. Results: The proposed low rank ADMM approach shows a reduced root mean square error compared to the original fingerprinting reconstructi...
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
Ni, Yuzhao; Yuan, Xiaotong; Yan, Shuicheng; Cheong, Loong-Fah
2010-01-01
Recently there is a line of research work proposing to employ Spectral Clustering (SC) to segment (group) {Throughout the paper, we use segmentation, clustering, and grouping, and their verb forms, interchangeably.} high-dimensional structural data such as those (approximately) lying on subspaces {We follow \\cite{liu2010robust} and use the term "subspace" to denote both linear subspaces and affine subspaces. There is a trivial conversion between linear subspaces and affine subspaces as mentioned therein.} or low-dimensional manifolds. By learning the affinity matrix in the form of sparse reconstruction, techniques proposed in this vein often considerably boost the performance in subspace settings where traditional SC can fail. Despite the success, there are fundamental problems that have been left unsolved: the spectrum property of the learned affinity matrix cannot be gauged in advance, and there is often one ugly symmetrization step that post-processes the affinity for SC input. Hence we advocate to enforce...
Accurate high-harmonic spectra from time-dependent two-particle reduced density matrix theory
Lackner, Fabian; Sato, Takeshi; Ishikawa, Kenichi L; Burgdörfer, Joachim
2016-01-01
The accurate description of the non-linear response of many-electron systems to strong laser fields remains a major challenge. Methods that bypass the unfavorable exponential scaling with particle number are required to address larger systems. In this paper we present a fully three-dimensional implementation of the time-dependent two-particle reduced density matrix (TD-2RDM) method for many-electron atoms. We benchmark this approach by a comparison with multi-configurational time-dependent Hartree-Fock (MCTDHF) results for the harmonic spectra of beryllium and neon. We show that the TD-2RDM is very well-suited to describe the non-linear atomic response and to reveal the influence of electron-correlation effects.
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data in structural vibration response time histories often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Moreover, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the data itself, that of the structural vibration responses, to address this inverse problem. What is relevant is an empirical, but often practically true, observation: typically only a few modes are active in the structural vibration responses; hence a sparse representation (in the frequency domain) of the single-channel data vector, or a low-rank structure (by singular value decomposition) of the multi-channel data matrix. Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on a few structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
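The inter-channel low-rank recovery idea described above can be sketched with a simple alternating-projection variant (truncate to a known target rank, then re-impose the observed entries). This is a simplified stand-in for the nuclear-norm-minimization completion discussed in the abstract, with synthetic data and function names of my own choosing:

```python
import numpy as np

def complete_by_rank_projection(M, mask, r, iters=200):
    """Fill missing entries by alternating projection: truncate to rank r,
    then re-impose the observed entries (a hard-rank simplification of
    nuclear-norm matrix completion)."""
    X = np.where(mask, M, 0.0)                 # missing entries start at zero
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation
        X[mask] = M[mask]                      # keep observed data exact
    return X

# synthetic rank-2 "multi-channel response" matrix, ~40% entries missing
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(A.shape) > 0.4
A_hat = complete_by_rank_projection(A, mask, r=2)
rel_err = np.linalg.norm(A_hat - A) / np.linalg.norm(A)
```

With the rank far below the number of observed entries, the iteration typically drives `rel_err` to a small value; nuclear-norm solvers avoid having to know `r` in advance.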
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices
Lin, Zhouchen; Wu, Leqin; Ma, Yi
2010-01-01
This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, the Robust PCA problem can be exactly solved via convex optimization that minimizes a combination of the nuclear norm and the $\ell^1$-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such new objective functions, prove the optimality of the proposed algorithms, and characterize their convergence rate. Empirically, the proposed new algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) ...
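A minimal sketch of an inexact-ALM iteration for Robust PCA, assuming the standard formulation min ||L||_* + λ||S||_1 subject to D = L + S. The parameter choices below follow common defaults rather than the paper's exact settings, and the function names are my own:

```python
import numpy as np

def shrink(X, tau):
    """Entry-wise soft threshold: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(D, lam=None, iters=300, tol=1e-7):
    """Inexact ALM for min ||L||_* + lam*||S||_1  s.t.  D = L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    mu = 1.25 / norm2                              # common initial penalty
    Y = D / max(norm2, np.abs(D).max() / lam)      # common dual initialization
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)          # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)       # sparse update
        R = D - L - S                              # constraint residual
        Y += mu * R                                # dual ascent
        mu *= 1.5                                  # grow the penalty
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return L, S

# demo: rank-2 matrix plus ~5% gross sparse corruption
rng = np.random.default_rng(1)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.where(rng.random((50, 50)) < 0.05,
              10.0 * rng.standard_normal((50, 50)), 0.0)
L_hat, S_hat = rpca_ialm(L0 + S0)
rel = np.linalg.norm(L_hat - L0) / np.linalg.norm(L0)
```

In the incoherent low-rank plus sparse regime, this iteration typically recovers `L0` to high accuracy within a few dozen iterations.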
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, the design of the random sampling pattern is very important. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high density region, and so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to the k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
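The VD Poisson disc baseline mentioned above can be sketched by simple dart throwing with a center-dependent exclusion radius; this is a crude illustration (with radii and names I chose), not the paper's conflict-cost design, which refines this idea:

```python
import numpy as np

def vd_poisson_disc(n=64, r_center=1.0, r_edge=6.0, trials=8000, seed=0):
    """Dart-throwing variable-density Poisson disc mask on an n x n grid:
    the exclusion radius grows from the k-space center outward, so low
    frequencies are sampled densely and high frequencies sparsely."""
    rng = np.random.default_rng(seed)
    c = (n - 1) / 2.0
    pts = np.empty((0, 2))
    for _ in range(trials):
        p = rng.random(2) * n                            # candidate sample
        d = np.linalg.norm(p - c) / (np.sqrt(2.0) * c)   # 0 center, 1 corner
        r = r_center + (r_edge - r_center) * d           # local min spacing
        if pts.shape[0] == 0 or np.min(np.linalg.norm(pts - p, axis=1)) >= r:
            pts = np.vstack([pts, p])                    # accept the dart
    mask = np.zeros((n, n), dtype=bool)
    ij = np.floor(pts).astype(int)
    mask[ij[:, 0], ij[:, 1]] = True
    return mask

mask = vd_poisson_disc()
center = mask[24:40, 24:40].mean()   # sampling density near k-space center
edge = mask[:8, :].mean()            # sampling density at the outer rows
```

Dart throwing becomes inefficient exactly where the abstract says: in the dense central region, most candidates are rejected, which motivates the conflict-cost construction.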
Low-Rank and Joint Sparse Representations for Multi-Modal Recognition.
Zhang, Heng; Patel, Vishal M; Chellappa, Rama
2017-10-01
We propose multi-task and multivariate methods for multi-modal recognition based on low-rank and joint sparse representations. Our formulations can be viewed as generalized versions of multivariate low-rank and sparse regression, where sparse and low-rank representations across all modalities are imposed. One of our methods simultaneously couples information within different modalities by enforcing the common low-rank and joint sparse constraints among multi-modal observations. We also modify our formulations by including an occlusion term that is assumed to be sparse. The alternating direction method of multipliers is proposed to efficiently solve the resulting optimization problems. Extensive experiments on three publicly available multi-modal biometrics and object recognition data sets show that our methods compare favorably with other feature-level fusion methods.
Distillability and PPT entanglement of low-rank quantum states
Energy Technology Data Exchange (ETDEWEB)
Chen Lin [Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117542 (Singapore); Dokovic, Dragomir Z, E-mail: cqtcl@nus.edu.sg, E-mail: djokovic@uwaterloo.ca [Department of Pure Mathematics and Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2 L 3G1 (Canada)
2011-07-15
The bipartite quantum states {rho}, with rank strictly smaller than the maximum of the ranks of the reduced states {rho}{sub A} and {rho}{sub B}, are distillable by local operations and classical communication (Horodecki P, Smolin J A, Terhal B M and Thapliyal A V 2003 Theor. Comput. Sci. 292 589-96; 1999 arXiv:quant-ph/9910122). Our first main result is that this is also true for NPT states with rank equal to this maximum. (A state is PPT if the partial transpose of its density matrix is positive semidefinite, and otherwise it is NPT.) This was conjectured first in 1999 in the special case when the ranks of {rho}{sub A} and {rho}{sub B} are equal (see Horodecki P, Smolin J A, Terhal B M and Thapliyal A V 2003 Theor. Comput. Sci. 292 589-96; 1999 arXiv:quant-ph/9910122). Our second main result provides a complete solution of the separability problem for bipartite states of rank 4. Namely, we show that such a state is separable if and only if it is PPT and its range contains at least one product state. We also prove that the so-called checkerboard states are distillable if and only if they are NPT.
Distillability and PPT entanglement of low-rank quantum states
Chen, Lin
2011-01-01
It is well known that bipartite quantum states whose rank is strictly smaller than the maximum of the ranks of the reduced states are 1-distillable by local operations and classical communication. Our first main result is that this is also true for states with rank equal to this maximum. This was conjectured in 1999 in the special case when the two local ranks are equal. From our main result we obtain a new constraint on the monogamy of entanglement: a tripartite pure state cannot have two entangled undistillable reduced bipartite density operators. We also prove that the so-called checkerboard states are 1-distillable if and only if they are NPT, i.e., the partial transpose of the density matrix is not positive semidefinite. On the basis of this proof, we derive our second main result. Namely, bipartite states of rank 4 which are also PPT, i.e., have positive semidefinite partial transpose, are separable if and only if their range contains a product state. This provides a complete solution of the separability ...
Thermolysis of phenethyl phenyl ether: A model of ether linkages in low rank coal
Energy Technology Data Exchange (ETDEWEB)
Britt, P.F.; Buchanan, A.C. III; Malcolm, E.A.
1994-09-01
Currently, an area of interest and frustration for coal chemists has been the direct liquefaction of low-rank coal. Although low-rank coals are more reactive than bituminous coals, they are more difficult to liquefy and offer lower liquefaction yields under conditions optimized for bituminous coals. Solomon, Serio, and co-workers have shown that, in the pyrolysis and liquefaction of low-rank coals, a low-temperature cross-linking reaction associated with oxygen functional groups occurs before tar evolution. A variety of pretreatments (demineralization, alkylation, and ion exchange) have been shown to reduce these retrogressive reactions and increase tar yields, but the actual chemical reactions responsible for these processes have not been defined. In order to gain insight into the thermochemical reactions leading to cross-linking in low-rank coal, we have undertaken a study of the pyrolysis of oxygen-containing coal model compounds. Solid-state NMR studies suggest that the alkyl aryl ether linkage may be present in modest amounts in low-rank coal. Therefore, in this paper, we investigate the thermolysis of phenethyl phenyl ether (PPE) as a model of O-aryl ether linkages found in low-rank coal, lignites, and lignin, an evolutionary precursor of coal. Our results have uncovered a new reaction channel that can account for 25% of the products formed. The impact of reaction conditions, including restricted mass transport, on this new reaction pathway and the role of oxygen functional groups in cross-linking reactions will be investigated.
Low-rank coal research, Task 5.1. Topical report, April 1986--December 1992
Energy Technology Data Exchange (ETDEWEB)
1993-02-01
This document is a topical progress report for Low-Rank Coal Research performed April 1986 - December 1992. Control Technology and Coal Preparation Research is described for Flue Gas Cleanup, Waste Management, Regional Energy Policy Program for the Northern Great Plains, and Hot-Gas Cleanup. Advanced Research and Technology Development was conducted on Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Combustion Research is described for Atmospheric Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Fuels (completed 10/31/90), Diesel Utilization of Low-Rank Coals (completed 12/31/90), Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications (completed 10/31/90), Nitrous Oxide Emission, and Pressurized Fluidized-Bed Combustion. Liquefaction Research in Low-Rank Coal Direct Liquefaction is discussed. Gasification Research was conducted in Production of Hydrogen and By-Products from Coals and in Sulfur Forms in Coal.
Low-rank coal study. Volume 4. Regulatory, environmental, and market analyses
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
The regulatory, environmental, and market constraints to development of US low-rank coal resources are analyzed. Government-imposed environmental and regulatory requirements are among the most important factors that determine the markets for low-rank coal and the technology used in the extraction, delivery, and utilization systems. Both state and federal controls are examined, in light of available data on impacts and effluents associated with major low-rank coal development efforts. The market analysis examines both the penetration of existing markets by low-rank coal and the evolution of potential markets in the future. The electric utility industry consumes about 99 percent of the total low-rank coal production. This use in utility boilers rose dramatically in the 1970's and is expected to continue to grow rapidly. In the late 1980's and 1990's, industrial direct use of low-rank coal and the production of synthetic fuels are expected to start growing as major new markets.
The potential of more accurate InSAR covariance matrix estimation for land cover mapping
Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin
2017-04-01
Synthetic aperture radar (SAR) and Interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have developed analyses that investigate SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.
Accurate upper-lower bounds on homogenized matrix by FFT-based Galerkin method
Vondřejc, Jaroslav; Marek, Ivo
2014-01-01
Accurate upper-lower bounds on the homogenized matrix, arising from the unit cell problem for periodic media, are calculated for a scalar elliptic setting. Our approach builds on the recent variational reformulation of the Moulinec-Suquet (1994) Fast Fourier Transform (FFT) homogenization scheme by Vondřejc et al. (2014), which is based on the conforming Galerkin approximation with trigonometric polynomials. Upper-lower bounds are obtained by adjusting the primal-dual finite element framework developed independently by Dvořák (1993) and Więckowski (1995) to the FFT-based Galerkin setting. We show that the discretization procedure differs for odd and even numbers of discretization points. In particular, thanks to the Helmholtz decomposition inherited from the continuous formulation, the duality structure is fully recovered for odd discretizations. In the even case, a more complex primal-dual structure is observed due to the trigonometric polynomials associated with the Nyquist frequencies. The...
Fast and accurate generation method of PSF-based system matrix for PET reconstruction
Sun, Xiao-Li; Liu, Shuang-Quan; Yun, Ming-Kai; Li, Dao-Wu; Gao, Juan; Li, Mo-Han; Chai, Pei; Tang, Hao-Hui; Zhang, Zhi-Ming; Wei, Long
2017-04-01
This work investigates the positional single photon incidence response (P-SPIR) to provide an accurate point spread function (PSF)-contained system matrix and its incorporation within the image reconstruction framework. Based on the Geant4 Application for Emission Tomography (GATE) simulation, P-SPIR theory takes both incidence angle and incidence position of the gamma photon into account during crystal subdivision, instead of only taking the former into account, as in single photon incidence response (SPIR). The response distribution obtained in this fashion was validated using Monte Carlo simulations. In addition, two-block penetration and normalization of the response probability are introduced to improve the accuracy of the PSF. With the incorporation of the PSF, the homogenization model is then analyzed to calculate the spread distribution of each line-of-response (LOR). A primate PET scanner, Eplus-260, developed by the Institute of High Energy Physics, Chinese Academy of Sciences (IHEP), was employed to evaluate the proposed method. The reconstructed images indicate that the P-SPIR method can effectively mitigate the depth-of-interaction (DOI) effect, especially at the peripheral area of field-of-view (FOV). Furthermore, the method can be applied to PET scanners with any other structures and list-mode data format with high flexibility and efficiency. Supported by National Natural Science Foundation of China (81301348) and China Postdoctoral Science Foundation (2015M570154)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb^2) in the size of the atomic orbitals basis set, Nb, instead of the practically intractable O(Nb^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities is of almost the same magnitude as the number of occupied orbitals in the molecular systems, No ...
Low-rank coal study : national needs for resource development. Volume 2. Resource characterization
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
Comprehensive data are presented on the quantity, quality, and distribution of low-rank coal (subbituminous and lignite) deposits in the United States. The major lignite-bearing areas are the Fort Union Region and the Gulf Lignite Region, with the predominant strippable reserves being in the states of North Dakota, Montana, and Texas. The largest subbituminous coal deposits are in the Powder River Region of Montana and Wyoming, the San Juan Basin of New Mexico, and in Northern Alaska. For each of the low-rank coal-bearing regions, descriptions are provided of the geology; strippable reserves; active and planned mines; classification of identified resources by depth, seam thickness, sulfur content, and ash content; overburden characteristics; aquifers; and coal properties and characteristics. Low-rank coals are distinguished from bituminous coals by unique chemical and physical properties that affect their behavior in extraction, utilization, or conversion processes. The most characteristic properties of the organic fraction of low-rank coals are the high inherent moisture and oxygen contents, and the correspondingly low heating value. Mineral matter (ash) contents and compositions of all coals are highly variable; however, low-rank coals tend to have a higher proportion of the alkali components CaO, MgO, and Na{sub 2}O. About 90% of the reserve base of US low-rank coal has less than one percent sulfur. Water resources in the major low-rank coal-bearing regions tend to have highly seasonal availabilities. Some areas appear to have ample water resources to support major new coal projects; in other areas, such as Texas, water supplies may be a constraining factor on development.
Harris, Stephen H.; Smith, Richard L.; Barker, Charles E.
2008-01-01
Lignite and subbituminous coals were investigated for their ability to support microbial methane production in laboratory incubations. Results show that naturally-occurring microorganisms associated with the coals produced substantial quantities of methane, although the factors influencing this process were variable among different samples tested. Methanogenic microbes in two coals from the Powder River Basin, Wyoming, USA, produced 140.5-374.6 mL CH4/kg (4.5-12.0 standard cubic feet (scf)/ton) in response to an amendment of H2/CO2. The addition of high concentrations (5-10 mM) of acetate did not support substantive methane production under the laboratory conditions. However, acetate accumulated in control incubations where methanogenesis was inhibited, indicating that acetate was produced and consumed during the course of methane production. Acetogenesis from H2/CO2 was evident in these incubations and may serve as a competing metabolic mode influencing the cumulative amount of methane produced in coal. Two low-rank (lignite A) coals from Fort Yukon, Alaska, USA, demonstrated a comparable level of methane production (131.1-284.0 mL CH4/kg (4.2-9.1 scf/ton)) in the presence of an inorganic nutrient amendment, indicating that the source of energy and organic carbon was derived from the coal. The concentration of chloroform-extractable organic matter varied by almost three orders of magnitude among all the coals tested, and appeared to be related to methane production potential. These results indicate that substrate availability within the coal matrix and competition between different groups of microorganisms are two factors that may exert a profound influence on methanogenesis in subsurface coal beds.
CT image sequence restoration based on sparse and low-rank decomposition.
Directory of Open Access Journals (Sweden)
Shuiping Gou
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function is employed in a Wiener filter to efficiently remove blur in the sparse component; Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with all of the recovered sparse images. Our method achieves restoration results with higher contrast, sharper organ boundaries, and richer soft tissue structure information compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP), and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for CT images with large noise.
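Of the three low-rank models compared in this record, GoDec admits a particularly compact sketch. The toy implementation below is a simplified, assumption-laden version of the published algorithm: the true rank and the sparse cardinality are given, and the data are noise-free.

```python
import numpy as np

def godec(X, rank, card, iters=200):
    """Split X ~ L + S with rank(L) <= rank and at most `card` nonzeros in S."""
    S = np.zeros_like(X)
    for _ in range(iters):
        # Low-rank step: best rank-`rank` approximation of X - S
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the `card` largest-magnitude entries of X - L
        R = X - L
        S = np.zeros_like(X)
        keep = np.unravel_index(np.argsort(np.abs(R), axis=None)[-card:], X.shape)
        S[keep] = R[keep]
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))  # rank-3 part
S0 = np.zeros((60, 40))
S0.flat[rng.choice(60 * 40, size=80, replace=False)] = 10.0       # sparse outliers
L_hat, S_hat = godec(L0 + S0, rank=3, card=80)
assert np.linalg.norm(L_hat - L0) / np.linalg.norm(L0) < 1e-3
```

For image sequences, `X` would hold vectorized frames as columns; the low-rank part captures the shared background and the sparse part the frame-specific detail.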
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhenyue [Zhejiang Univ., Hangzhou (People' s Republic of China); Zha, Hongyuan [Pennsylvania State Univ., University Park, PA (United States); Simon, Horst [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
Low-rank and eigenface based sparse representation for face recognition.
Directory of Open Access Journals (Sweden)
Yi-Fu Hou
In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well-known Sparse Representation based Classification (SRC). Firstly, the low-rank images of the face images of each individual in the training subset are extracted by Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noise (e.g., illumination differences and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank and approximate images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method.
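The SVD-based eigenface step can be sketched in a few lines. Random matrices stand in for the Robust-PCA-cleaned training faces; the dictionary construction and sparse coding stages of the method are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.standard_normal((20, 32 * 32))   # 20 stand-in images, 32x32 pixels

mean_face = faces.mean(axis=0)
X = faces - mean_face                        # center before the SVD

# Right singular vectors of the centered data are the eigenfaces
_, _, Vt = np.linalg.svd(X, full_matrices=False)
eigenfaces = Vt[:10]                         # top-10 atoms for the dictionary

# Any face is encoded by its compact eigenface coefficients
coeffs = eigenfaces @ (faces[0] - mean_face)
recon = mean_face + eigenfaces.T @ coeffs
assert np.allclose(eigenfaces @ eigenfaces.T, np.eye(10))
```

The orthonormal rows of `eigenfaces` make encoding a single matrix-vector product, which is what keeps the resulting dictionary compact.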
Low-rank coal study: national needs for resource development. Volume 3. Technology evaluation
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
Technologies applicable to the development and use of low-rank coals are analyzed in order to identify specific needs for research, development, and demonstration (RD and D). Major sections of the report address the following technologies: extraction; transportation; preparation, handling and storage; conventional combustion and environmental control technology; gasification; liquefaction; and pyrolysis. Each of these sections contains an introduction and summary of the key issues with regard to subbituminous coal and lignite; description of all relevant technology, both existing and under development; a description of related environmental control technology; an evaluation of the effects of low-rank coal properties on the technology; and summaries of current commercial status of the technology and/or current RD and D projects relevant to low-rank coals.
Low-rank coal study. Volume 5. RD and D program evaluation
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
A national program is recommended for research, development, and demonstration (RD and D) of improved technologies for the environmentally acceptable use of low-rank coals. RD and D project recommendations are outlined in all applicable technology areas, including extraction, transportation, preparation, handling and storage, conventional combustion and environmental control technology, fluidized bed combustion, gasification, liquefaction, and pyrolysis. Basic research topics are identified separately, as well as a series of crosscutting research activities addressing environmental, economic, and regulatory issues. The recommended RD and D activities are classified into Priority I and Priority II categories, reflecting their relative urgency and potential impact on the advancement of low-rank coal development. Summaries of ongoing research projects on low-rank coals in the US are presented in an Appendix, and the relationships of these ongoing efforts to the recommended RD and D program are discussed.
Low-rank and eigenface based sparse representation for face recognition.
Hou, Yi-Fu; Sun, Zhan-Li; Chong, Yan-Wen; Zheng, Chun-Hou
2014-01-01
In this paper, based on low-rank representation and eigenface extraction, we present an improvement to the well known Sparse Representation based Classification (SRC). Firstly, the low-rank images of the face images of each individual in training subset are extracted by the Robust Principal Component Analysis (Robust PCA) to alleviate the influence of noises (e.g., illumination difference and occlusions). Secondly, Singular Value Decomposition (SVD) is applied to extract the eigenfaces from these low-rank and approximate images. Finally, we utilize these eigenfaces to construct a compact and discriminative dictionary for sparse representation. We evaluate our method on five popular databases. Experimental results demonstrate the effectiveness and robustness of our method.
Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong
2009-09-01
To address the low speed and poor efficiency of current domestic multilayer optical coating design when the layer number is large, accurate and fast calculation of the merit function's gradient and Hessian matrix is identified as the key. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that the model is mathematically strict and accurate, and its precision can reach the floating-point limit of the computer, with short computation time and fast speed. It is thus well suited to improving the search speed and efficiency of local optimization methods based on derivatives of the merit function, and it performs outstandingly in multilayer optical coating design with a large layer number.
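The matrix method referred to above computes a coating's spectral response as a product of 2x2 characteristic matrices. Below is a minimal normal-incidence sketch of the standard textbook formulation, not the authors' Matlab code; the MgF2-on-glass values are illustrative.

```python
import numpy as np

def reflectance(layers, n_sub, wavelength, n0=1.0):
    """Normal-incidence reflectance via the product of 2x2 characteristic
    matrices; `layers` is a list of (refractive index, physical thickness)."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / wavelength   # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])              # substrate boundary condition
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Quarter-wave MgF2 layer on glass at 550 nm: the classic single-layer AR coating
R = reflectance([(1.38, 550.0 / (4 * 1.38))], n_sub=1.52, wavelength=550.0)
assert abs(R - ((1 - 1.38**2 / 1.52) / (1 + 1.38**2 / 1.52)) ** 2) < 1e-12
```

Because each layer contributes one analytic 2x2 factor, gradients of the merit function with respect to layer thicknesses can likewise be accumulated analytically, which is the speedup the abstract describes.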
Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion
Directory of Open Access Journals (Sweden)
Kan Ren
2014-01-01
We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.
Exact Low-Rank Matrix Completion from Sparsely Corrupted Entries via Adaptive Outlier Pursuit
2012-05-02
Transportation costs for new fuel forms produced from low rank US coals
Energy Technology Data Exchange (ETDEWEB)
Newcombe, R.J.; McKelvey, D.G. (TMS, Inc., Germantown, MD (USA)); Ruether, J.A. (USDOE Pittsburgh Energy Technology Center, PA (USA))
1990-09-01
Transportation costs are examined for four types of new fuel forms (solid, syncrude, methanol, and slurry) produced from low rank coals found in the lower 48 states of the USA. Nine low rank coal deposits are considered as possible feedstocks for mine mouth processing plants. Transportation modes analyzed include ship/barge, pipelines, rail, and truck. The largest potential market for the new fuel forms is coal-fired utility boilers without emission controls. Lowest cost routes from each of the nine source regions to supply this market are determined. 12 figs.
Mazziotti, David A
2016-10-07
A central challenge of physics is the computation of strongly correlated quantum systems. The past ten years have witnessed the development and application of the variational calculation of the two-electron reduced density matrix (2-RDM) without the wave function. In this Letter we present an orders-of-magnitude improvement in the accuracy of 2-RDM calculations without an increase in their computational cost. The advance is based on a low-rank, dual formulation of an important constraint on the 2-RDM, the T2 condition. Calculations are presented for metallic chains and a cadmium-selenide dimer. The low-scaling T2 condition will have significant applications in atomic and molecular, condensed-matter, and nuclear physics.
Low-rank coal research: Volume 2, Advanced research and technology development: Final report
Energy Technology Data Exchange (ETDEWEB)
Mann, M.D.; Swanson, M.L.; Benson, S.A.; Radonovich, L.; Steadman, E.N.; Sweeny, P.G.; McCollor, D.P.; Kleesattel, D.; Grow, D.; Falcone, S.K.
1987-04-01
Volume II contains articles on advanced combustion phenomena; combustion inorganic transformations; coal/char reactivity; liquefaction reactivity of low-rank coals; gasification ash and slag characterization; and fine particulate emissions. These articles have been entered individually into EDB and ERA. (LTN)
Enhanced low-rank representation via sparse manifold adaption for semi-supervised learning.
Peng, Yong; Lu, Bao-Liang; Wang, Suhang
2015-05-01
Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among existing graph-based learning models, low-rank representation (LRR) is a very competitive one, and it has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated from the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of the data, which has been shown to be beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representations. MLRR explicitly takes the local manifold structure of the data into consideration; this structure is identified through the geometric sparsity idea: specifically, the local tangent space of each data point is sought by solving a sparse representation objective. The graph depicting the relationships among data points can then be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines the global information emphasized by the low-rank property with the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-05-04
Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. The reduced computational complexity is O(dL log L), where L := max_i n_i, i = 1, ..., d.
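The FFT ingredient of the speedup rests on circulant embedding of the stationary covariance matrix. A 1-D sketch under an assumed exponential covariance (not the study's code) shows the O(L log L) covariance matvec:

```python
import numpy as np

n = 256
lag = np.arange(n)
cov = np.exp(-lag / 32.0)                  # assumed exponential covariance row

# Circulant embedding of the n x n Toeplitz covariance into size 2n
c = np.concatenate([cov, [0.0], cov[1:][::-1]])
lam = np.fft.fft(c)                        # eigenvalues of the circulant

def toeplitz_matvec(v):
    """O(L log L) covariance-times-vector product via FFT."""
    vp = np.concatenate([v, np.zeros(n)])
    return np.real(np.fft.ifft(lam * np.fft.fft(vp)))[:n]

v = np.random.default_rng(0).standard_normal(n)
dense = cov[np.abs(lag[:, None] - lag[None, :])]   # explicit covariance matrix
assert np.allclose(toeplitz_matvec(v), dense @ v)
```

In d dimensions the same trick applies along each axis of a tensor grid, which combined with separable or low-rank covariance factors yields the quoted O(dL log L) cost.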
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is required to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM, and the FTIM. The estimated velocity distributions, the relative errors, and the elapsed time all demonstrate the validity of the proposed DDM.
Influence of magma intrusion on gas outburst in a low rank coal mine
Institute of Scientific and Technical Information of China (English)
Chen Shangbin; Zhu Yanming; Li Wu; Wang Hui
2012-01-01
belts on the other, as well as the unaffected coal seam itself, trap a large amount of gas during the thermal activity. This is the basic reason for gas outburst. These conclusions can inform activities related to gas prevention and control in a low rank coal mine affected by magma intrusion.
Alborzpour, Jonathan P.; Tew, David P.; Habershon, Scott
2016-11-01
Solution of the time-dependent Schrödinger equation using a linear combination of basis functions, such as Gaussian wavepackets (GWPs), requires costly evaluation of integrals over the entire potential energy surface (PES) of the system. The standard approach, motivated by computational tractability for direct dynamics, is to approximate the PES with a second order Taylor expansion, for example centred at each GWP. In this article, we propose an alternative method for approximating PES matrix elements based on PES interpolation using Gaussian process regression (GPR). Our GPR scheme requires only single-point evaluations of the PES at a limited number of configurations in each time-step; the necessity of performing often-expensive evaluations of the Hessian matrix is completely avoided. In applications to 2-, 5-, and 10-dimensional benchmark models describing a tunnelling coordinate coupled non-linearly to a set of harmonic oscillators, we find that our GPR method results in PES matrix elements for which the average error is, in the best case, two orders-of-magnitude smaller and, in the worst case, directly comparable to that determined by any other Taylor expansion method, without requiring additional PES evaluations or Hessian matrices. Given the computational simplicity of GPR, as well as the opportunities for further refinement of the procedure highlighted herein, we argue that our GPR methodology should replace methods for evaluating PES matrix elements using Taylor expansions in quantum dynamics simulations.
Matrix Factorization and Matrix Concentration
Mackey, Lester
2012-01-01
Motivated by the constrained factorization problems of sparse principal component analysis (PCA) for gene expression modeling, low-rank matrix completion for recommender systems, and robust matrix factorization for video surveillance, this dissertation explores the modeling, methodology, and theory of matrix factorization. We begin by exposing the theoretical and empirical shortcomings of standard deflation techniques for sparse PCA and developing alternative methodology more suitable for def...
Directory of Open Access Journals (Sweden)
Fan Meng
This paper studies the problem of restoring images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision, and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which we define as the problem of matrix completion from corrupted samplings and model as a convex optimization problem that minimizes a combination of the nuclear norm and the l1-norm. Meanwhile, we put forward a novel and effective algorithm based on augmented Lagrange multipliers to solve the problem exactly. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant when images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image.
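The augmented-Lagrange-multiplier machinery mentioned above alternates two proximal maps: singular value thresholding for the nuclear norm and entrywise soft thresholding for the l1 term. Below is a bare-bones, fully observed RPCA variant as a sketch; the paper's completion-from-corrupted-samplings model additionally restricts the constraint to the sampled entries.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: proximal map of tau * l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam, mu, iters=300):
    """min ||L||_* + lam*||S||_1  subject to  L + S = M, via ADMM."""
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))  # rank 2
S0 = np.zeros((60, 60))
S0.flat[rng.choice(3600, 180, replace=False)] = 5.0               # 5% outliers
M = L0 + S0
L, S = rpca(M, lam=1.0 / np.sqrt(60), mu=3600 / (4.0 * np.abs(M).sum()))
assert np.linalg.norm(L - L0) / np.linalg.norm(L0) < 1e-2
```

The `lam` and `mu` choices follow common RPCA heuristics (1/sqrt(max dimension) and n1*n2/(4*||M||_1)); they are illustrative defaults, not the paper's tuning.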
Fast low-rank approximations of multidimensional integrals in ion-atomic collisions modelling
Litsarev, M S
2015-01-01
An efficient technique based on low-rank separated approximations is proposed for the computation of three-dimensional integrals arising in the energy deposition model that describes ion-atomic collisions. Direct tensor-product quadrature requires grids of size $4000^3$, which is unacceptable. Moreover, several such integrals have to be computed simultaneously for different values of parameters. To reduce the complexity, we use the structure of the integrand and apply numerical linear algebra techniques to construct a low-rank approximation. The resulting algorithm is $10^3$ times faster than the spectral quadratures in spherical coordinates used in the original DEPOSIT code. The approach can be generalized to other multidimensional problems in physics.
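The separated-representation idea can be illustrated on a 2-D analogue: factor the gridded integrand by a truncated SVD so the double quadrature sum collapses into a short sum of products of one-dimensional sums. This is a schematic of the principle, not the DEPOSIT integrals themselves.

```python
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))               # trapezoid-rule weights
w[0] *= 0.5
w[-1] *= 0.5

H = 1.0 / (1.0 + x[:, None] + x[None, :])   # smooth 2-D integrand on the grid

# Separated (low-rank) representation via truncated SVD:
# integral ~ sum_k s_k * (sum_i w_i U_ik) * (sum_j w_j V_jk)
U, s, Vt = np.linalg.svd(H)
r = int(np.sum(s > 1e-12 * s[0]))           # numerical rank
approx = sum(s[k] * (w @ U[:, k]) * (Vt[k] @ w) for k in range(r))

full = w @ H @ w                            # direct O(n^2) tensor-product sum
assert abs(approx - full) < 1e-8
assert r < 30                               # far fewer terms than n
```

The direct sum costs O(n^2) (O(n^3) in 3-D), while the separated form costs O(rn); for smooth kernels the numerical rank r grows only logarithmically with the accuracy.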
Task 27 -- Alaskan low-rank coal-water fuel demonstration project
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-10-01
Development of coal-water-fuel (CWF) technology has to date been predicated on the use of high-rank bituminous coal only, and until now the high inherent moisture content of low-rank coal has precluded its use for CWF production. The unique feature of the Alaskan project is the integration of hot-water-drying (HWD) into CWF technology as a beneficiation process. Hot-water-drying is an EERC-developed technology unavailable to the competition that allows the range of CWF feedstocks to be extended to low-rank coals. The primary objective of the Alaskan Project is to promote interest in the CWF marketplace by demonstrating the commercial viability of low-rank coal-water-fuel (LRCWF). While commercialization plans cannot be finalized until the implementation and results of the Alaskan LRCWF Project are known and evaluated, this report has been prepared to specifically address issues concerning business objectives for the project, and to outline a market development plan for meeting those objectives.
The application of low-rank and sparse decomposition method in the field of climatology
Gupta, Nitika; Bhaskaran, Prasad K.
2017-03-01
The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique was limited to areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, this method exactly separates the original data into a set of low-rank and sparse components, wherein the low-rank component depicts the linearly correlated dataset (expected or mean behavior), and the sparse component represents the variation or perturbation of the dataset from its mean behavior. The study attempts to verify the efficacy of this technique in the field of climatology with two real-world examples. The first example applies the technique to maximum wind-speed (MWS) data for the Indian Ocean (IO) region. The study brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second example deals with sea surface temperature (SST) data for the Bay of Bengal region, which exhibits a distinct pattern in the sparse component. The study highlights the importance of the proposed technique for the interpretation and visualization of climate data.
Removal of silver(I) from aqueous solutions with low-rank Turkish coals
Energy Technology Data Exchange (ETDEWEB)
Karabakan, A.; Karabulut, S.; Denizli, A.; Yurum, Y. [University of Hacettepe, Ankara (Turkey). Dept. of Chemistry
2004-07-01
The removal of silver ions from aqueous solutions containing low-to-moderate levels of contamination using Turkish Beypazari low-rank coal was investigated. Carboxylic acid and phenolic hydroxyl functional groups present on the coal surface provided adsorption sites for the removal of silver ions from solution via ion exchange. The equilibrium pH of the coal/solution mixture was shown to be the principal factor controlling the extent of recovery of Ag+ ions from aqueous solutions. The optimum pH was measured as 4.0, and it was found that the maximum removal of silver from solution was achieved within 30 min. The maximum adsorption capacity for Ag+ ions was 1.87 mg/g coal. The adsorption phenomena appeared to follow a typical Langmuir isotherm. The use of low-rank coal was observed to be considerably more effective in the recovery of Ag+ ions from aqueous solutions. High proportions of the adsorbed Ag+ ions could be desorbed (up to 92%) using 25 mM EDTA. Low-rank Turkish coals were suitable for consecutive use for more than 10 cycles without significant loss of adsorption capacity.
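The Langmuir-isotherm behavior reported above is straightforward to fit numerically. The sketch below fits q = q_max*b*C/(1+b*C) to synthetic equilibrium data: the concentrations and the affinity constant b are made up; only the 1.87 mg/g capacity is taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, b):
    """Langmuir isotherm: q = q_max * b * C / (1 + b * C)."""
    return q_max * b * C / (1.0 + b * C)

# Hypothetical equilibrium data; only the 1.87 mg/g capacity comes from the study
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # concentration, mg/L
q = langmuir(C, 1.87, 0.05)                            # uptake, mg/g

(q_max, b), _ = curve_fit(langmuir, C, q, p0=[1.0, 0.01])
assert abs(q_max - 1.87) < 1e-4 and abs(b - 0.05) < 1e-4
```

In practice `q` would be measured uptakes, and the fitted `q_max` is the monolayer capacity quoted in such studies.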
Energy Technology Data Exchange (ETDEWEB)
Wiltsee, Jr., G. A.
1983-01-01
Progress reports are presented for the following tasks: (1) gasification wastewater treatment and reuse; (2) fine coal cleaning; (3) coal-water slurry preparation; (4) low-rank coal liquefaction; (5) combined flue gas cleanup/simultaneous SOx-NOx control; (6) particulate control and hydrocarbon and trace element emissions from low-rank coals; (7) waste characterization; (8) combustion research and ash fouling; (9) fluidized-bed combustion of low-rank coals; (10) ash and slag characterization; (11) organic structure of coal; (12) distribution of inorganics in low-rank coals; (13) physical properties and moisture of low-rank coals; (14) supercritical solvent extraction; and (15) pyrolysis and devolatilization.
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improve model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the extended Kalman filter by updating the forecast only along the directions of error growth, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second-order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite-difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
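The complex-step method (CSM) at the heart of the derivative computation above is tiny to demonstrate: unlike finite differences, the imaginary-part trick involves no subtraction, so the step can be taken extremely small with no round-off penalty. The function here is an arbitrary smooth toy, not the transport model.

```python
import numpy as np

def csd(f, x, h=1e-30):
    """Complex-step derivative: Im f(x + i*h) / h, second-order accurate."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))

# CSM matches the analytic derivative to machine precision...
assert abs(csd(f, x0) - exact) < 1e-13
# ...while a forward difference with the same tiny step underflows entirely
assert (f(x0 + 1e-30) - f(x0)) / 1e-30 == 0.0
```

The cost is that the model code must accept complex inputs, which is exactly the "complexification" trade-off the abstract discusses.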
Effect of Water Invasion on Outburst Predictive Index of Low Rank Coals in Dalong Mine.
Jiang, Jingyu; Cheng, Yuanping; Mou, Junhui; Jin, Kan; Cui, Jie
2015-01-01
To improve coal permeability and outburst prevention, coal seam water injection and a series of outburst prevention measures were tested in outburst coal mines. These methods have become important technologies for coal and gas outburst prevention and control: they increase the external moisture of the coal, decrease the stress of the coal seam, and change the coal pore structure and gas desorption speed. In addition, these techniques have had a significant impact on the gas extraction and outburst prevention indicators of coal seams. Globally, low rank coal reservoirs account for nearly half of hidden coal reserves, and the most obvious feature of low rank coal is its high natural moisture content. Moisture restrains gas desorption and affects the gas extraction and the accuracy of outburst prediction for coals. To study the influence of injected water on methane desorption dynamics and the outburst predictive index of coal, coal samples were collected from the Dalong Mine. The methane adsorption/desorption test was conducted on coal samples under conditions of different injected water contents. Selective analysis assessed the variations of the gas desorption quantities and the outburst prediction index (coal cutting desorption index). Adsorption tests indicated that the Langmuir volume of the Dalong coal sample is ~40.26 m3/t, indicating a strong gas adsorption ability. With increasing injected water content, the gas desorption amount of the coal samples decreased under the same pressure and temperature. Higher moisture content lowered the accumulated desorption quantity after 120 minutes. The gas desorption volumes and moisture content conformed to a logarithmic relationship. After moisture correction, we obtained the critical value of the long-flame coal outburst prediction (cutting desorption) index. This value can provide a theoretical basis for outburst prediction and prevention in low rank coal mines with similar occurrence conditions.
Effect of Water Invasion on Outburst Predictive Index of Low Rank Coals in Dalong Mine.
Directory of Open Access Journals (Sweden)
Jingyu Jiang
Full Text Available To improve the coal permeability and outburst prevention, coal seam water injection and a series of outburst prevention measures were tested in outburst coal mines. These methods have become important technologies used for coal and gas outburst prevention and control by increasing the external moisture of coal or decreasing the stress of coal seam and changing the coal pore structure and gas desorption speed. In addition, techniques have had a significant impact on the gas extraction and outburst prevention indicators of coal seams. Globally, low rank coals reservoirs account for nearly half of hidden coal reserves and the most obvious feature of low rank coal is the high natural moisture content. Moisture will restrain the gas desorption and will affect the gas extraction and accuracy of the outburst prediction of coals. To study the influence of injected water on methane desorption dynamic characteristics and the outburst predictive index of coal, coal samples were collected from the Dalong Mine. The methane adsorption/desorption test was conducted on coal samples under conditions of different injected water contents. Selective analysis assessed the variations of the gas desorption quantities and the outburst prediction index (coal cutting desorption index. Adsorption tests indicated that the Langmuir volume of the Dalong coal sample is ~40.26 m3/t, indicating a strong gas adsorption ability. With the increase of injected water content, the gas desorption amount of the coal samples decreased under the same pressure and temperature. Higher moisture content lowered the accumulation desorption quantity after 120 minutes. The gas desorption volumes and moisture content conformed to a logarithmic relationship. After moisture correction, we obtained the long-flame coal outburst prediction (cutting desorption index critical value. This value can provide a theoretical basis for outburst prediction and prevention of low rank coal mines and similar
Desulphurisation of high moisture content fuel-gases derived from low-rank coals
Energy Technology Data Exchange (ETDEWEB)
Hodges, S.; Anderson, B. [HRL Technology, Mulgrave, Vic. (Australia); Abbasian, J.; Slimane, R.B. [Inst. of Gas Technology, Des Plaines, IL (United States)
1999-07-01
Regenerable sulphur sorbent materials have been developed specifically for fluidised-bed desulphurisation of high moisture content fuel-gases derived from the gasification of low-rank coals. Selection of the most appropriate sorbents was based on thermodynamic limitations, strength/attrition resistance, reactivity and sulphur capacity. Pilot-scale tests showed that sorbents based on iron and copper were able to reduce the level of H₂S in the fuel-gas (up to 2.5 MPa, 350°C) from about 3000 ppmv to less than 100 ppmv. (orig.)
On matrices with low-rank-plus-shift structure: Partial SVD and latent semantic indexing
Energy Technology Data Exchange (ETDEWEB)
Zha, H.; Zhang, Z.
1998-08-01
The authors present a detailed analysis of matrices satisfying the so-called low-rank-plus-shift property in connection with the computation of their partial singular value decomposition. The application they have in mind is Latent Semantic Indexing for information retrieval, where the term-document matrices generated from a text corpus approximately satisfy this property. The analysis is motivated by the desire to develop more efficient methods for computing and updating the partial SVD of large term-document matrices and to gain a deeper understanding of the behavior of these methods in the presence of noise.
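The Latent Semantic Indexing use-case above reduces to a truncated (partial) SVD of the term-document matrix. A minimal NumPy sketch of that computation, with a toy matrix and function name invented for illustration (this is not the authors' updating algorithm):

```python
import numpy as np

def lsi_project(A, k):
    """Rank-k partial SVD of a term-document matrix A (terms x docs).

    Returns the truncated left factor, singular values, and the documents
    projected into the k-dimensional latent semantic space.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
    docs_k = np.diag(sk) @ Vtk        # k x n_docs latent coordinates
    return Uk, sk, docs_k

# Toy term-document matrix: 5 terms, 4 documents
A = np.array([[1., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])
Uk, sk, docs = lsi_project(A, 2)

# A query is folded into the latent space via the left singular vectors
q = np.array([1., 1., 0., 0., 0.])    # query containing the first two terms
q_k = Uk.T @ q
```

Queries and documents are then compared by cosine similarity in the k-dimensional space rather than in the raw term space.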
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-01-08
Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so compounds the individual speedup factors of all of them. For separable covariance functions the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, for problem sizes of 1.5e+13 and 2e+15 estimation points in Kriging and spatial design.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-01-06
Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so compounds the individual speedup factors of all of them. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1, ..., d. For separable covariance functions the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 1e+8, for problem sizes of 1.5e+13 and 2e+15 estimation points in Kriging and spatial design.
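The separability exploited in both versions of this work can be illustrated with a Kronecker-structured covariance: a matrix-vector product with C = C1 ⊗ C2 never needs the full matrix. A hedged sketch (the grid sizes and exponential covariance kernels are invented for the example; this is not the authors' code):

```python
import numpy as np

def sep_cov_matvec(C1, C2, x):
    """Compute kron(C1, C2) @ x without forming the Kronecker product.

    With row-major (NumPy) vectorization, kron(C1, C2) @ vec(X)
    equals vec(C1 @ X @ C2.T), reducing an O((n1*n2)^2) dense product
    to O(n1*n2*(n1 + n2)).
    """
    n1, n2 = C1.shape[0], C2.shape[0]
    X = x.reshape(n1, n2)              # row-major "unvec"
    return (C1 @ X @ C2.T).reshape(-1)

# Separable exponential covariance on a small 3 x 4 grid
t1 = np.linspace(0.0, 1.0, 3)
t2 = np.linspace(0.0, 1.0, 4)
C1 = np.exp(-np.abs(t1[:, None] - t1[None, :]))
C2 = np.exp(-np.abs(t2[:, None] - t2[None, :]))

rng = np.random.default_rng(0)
x = rng.standard_normal(12)
y_fast = sep_cov_matvec(C1, C2, x)
y_full = np.kron(C1, C2) @ x   # reference: explicit Kronecker product
```

For d dimensions the same reshaping trick applies factor by factor, which is where the O(d L log L) cost (with an FFT-based 1D matvec) comes from.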
Sampling and Low-Rank Tensor Approximation of the Response Surface
Litvinenko, Alexander
2013-01-01
Most (quasi-)Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate (we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution), there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely in case the solution is already well represented by the low-rank tensor approximation. This can be easily checked by evaluating the residuum of the PDE with the approximate solution. The procedure is demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.
Directory of Open Access Journals (Sweden)
Rajive Ganguli
2012-01-01
Full Text Available The impact of particle size distribution (PSD) of pulverized, low rank, high volatile content Alaska coal on combustion-related power plant performance was studied in a series of field-scale tests. Performance was gauged through efficiency (ratio of megawatts generated to energy consumed as coal), emissions (SO2, NOx, CO), and carbon content of ash (fly ash and bottom ash). The study revealed that the tested coal could be burned at a grind as coarse as 50% passing 76 microns with no deleterious impact on power generation and emissions. The PSDs tested in this study were in the range of 41 to 81 percent passing 76 microns. There was negligible correlation between PSD and the following factors: efficiency, SO2, NOx, and CO. Additionally, two tests where stack mercury (Hg) data was collected did not demonstrate any real difference in Hg emissions with PSD. The results from the field tests positively impact pulverized coal power plants that burn low rank, high volatile content coals (such as Powder River Basin coal). These plants can potentially reduce in-plant load by grinding the coal less (without impacting plant performance on emissions and efficiency) and thereby increase their marketability.
Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.
Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen
2016-07-07
Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which degrades the visual quality. Traditional denoising methods, which usually assume that noise is independent and identically distributed, are not suitable for content-dependent compression noise. In this paper, we propose a unified framework of content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate the framework of compression noise reduction based upon low-rank decomposition. Compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to compression noise levels and singular values. We analyze the relationship of image statistical characteristics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from statistics in both domains jointly with quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only obviously improves the quality of compressed images in post-processing, but is also helpful for computer vision tasks as a pre-processing method.
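The core shrinkage step described above, soft-thresholding the singular values of a stack of similar patches, can be sketched in a few lines. This is a generic illustration only, not the authors' adaptive threshold estimation; the toy patch group and the fixed threshold are invented:

```python
import numpy as np

def svt_denoise_group(P, tau):
    """Soft-threshold the singular values of a group of similar patches.

    P: (patch_dim x n_patches) matrix whose columns are vectorized similar
    patches. tau: threshold, in practice derived from the estimated noise
    level of the group.
    """
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-thresholding operator
    return U @ np.diag(s_shrunk) @ Vt

# Toy example: a rank-1 group of "similar patches" plus small noise
rng = np.random.default_rng(0)
clean = np.outer(np.ones(16), rng.standard_normal(8))   # 16-dim patches, 8 copies
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svt_denoise_group(noisy, tau=1.0)
```

Because the clean group is (near) low rank while the noise spreads energy across all singular directions, shrinking the small singular values suppresses the noise components while largely preserving the dominant patch structure.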
Somerville, W R C; Ru, E C Le
2015-01-01
We provide a detailed user guide for SMARTIES, a suite of Matlab codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. SMARTIES is a Matlab implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarised, with reference to the original publications. Instructions of use, and a detailed description of the code structure, its range of applicability, as well as guidelines for further developments by advanced users are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for...
Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi
2017-02-22
One major limiting factor that prevents the accurate delineation of human organs has been the presence of severe pathology and of pathology affecting organ borders. Overcoming these limitations is precisely the concern of this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to finer local details in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 database for the liver segmentation competition, and ranks 3rd among all published state-of-the-art automatic methods. Our method is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans, and its results are compared with three closely related methods. The applicability of the proposed method to segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate organ boundaries with a level of accuracy comparable to that of human raters.
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior
2004-04-30
This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Argillon GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, the available data from laboratory, pilot and full-scale SCR units was reviewed, leading to hypotheses about the mechanism for mercury oxidation by SCR catalysts.
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior
2004-10-29
This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Argillon GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, a model of Hg oxidation across SCRs was formulated based on full-scale data. The model took into account the effects of temperature, space velocity, catalyst type and HCl concentration in the flue gas.
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza
2013-08-01
This work proposes a sampling-based (non-intrusive) approach, within the context of low-rank separated representations, to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples, including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
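The regularized alternating least-squares construction can be illustrated in its simplest form: a two-dimensional, rank-one separated approximation (higher-dimensional separated representations cycle over each factor in the same way). A hedged sketch, with the test function and regularization value invented for the example:

```python
import numpy as np

def als_rank1(A, n_iter=50, lam=1e-12):
    """Rank-1 separated approximation A ~= outer(u, v) via regularized
    alternating least squares (the matrix case of a separated
    representation)."""
    m, n = A.shape
    v = np.ones(n)
    for _ in range(n_iter):
        u = A @ v / (v @ v + lam)      # least-squares update for u, v fixed
        v = A.T @ u / (u @ u + lam)    # least-squares update for v, u fixed
    return u, v

# Exactly separable data f(s)g(t) is recovered to near machine precision
x = np.linspace(0.0, 1.0, 20)
A = np.exp(-x)[:, None] * np.sin(1.0 + x)[None, :]
u, v = als_rank1(A)
err = np.linalg.norm(A - np.outer(u, v)) / np.linalg.norm(A)
```

Each half-step is a linear least-squares problem, which is what keeps the construction cheap; for non-separable data one adds further rank-one terms to the running residual.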
Discovering low-rank shared concept space for adapting text mining models.
Chen, Bo; Lam, Wai; Tsang, Ivor W; Wong, Tak-Lam
2013-06-01
We propose a framework for adapting text mining models that discovers low-rank shared concept space. Our major characteristic of this concept space is that it explicitly minimizes the distribution gap between the source domain with sufficient labeled data and the target domain with only unlabeled data, while at the same time it minimizes the empirical loss on the labeled data in the source domain. Our method is capable of conducting the domain adaptation task both in the original feature space as well as in the transformed Reproducing Kernel Hilbert Space (RKHS) using kernel tricks. Theoretical analysis guarantees that the error of our adaptation model can be bounded with respect to the embedded distribution gap and the empirical loss in the source domain. We have conducted extensive experiments on two common text mining problems, namely, document classification and information extraction, to demonstrate the efficacy of our proposed framework.
High-dimensional statistical inference: From vector to matrix
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications, and has attracted considerable recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases, and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that a sharp bound on the restricted isometry constant delta_k^A guarantees exact recovery by l1-norm minimization in compressed sensing and by nuclear norm minimization in affine rank minimization; moreover, for any epsilon > 0, this condition cannot be relaxed by epsilon. The second part of the thesis develops a nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. In the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature ...
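For reference, the restricted isometry constant appearing above is standardly defined as follows (this is general background, not text reconstructed from the thesis):

```latex
% Restricted isometry constant of order k for a linear map A:
% \delta_k^A is the smallest \delta \ge 0 such that
(1 - \delta_k^A)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_k^A)\,\|x\|_2^2
\qquad \text{for all } k\text{-sparse } x.
% The matrix (affine rank minimization) analogue ranges over all matrices
% X of rank at most k, with \|x\|_2 replaced by the Frobenius norm \|X\|_F.
```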
Extracellular oxidases and the transformation of solubilised low-rank coal by wood-rot fungi
Energy Technology Data Exchange (ETDEWEB)
Ralph, J.P. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Graham, L.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Catcheside, D.E.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences
1996-12-31
The involvement of extracellular oxidases in biotransformation of low-rank coal was assessed by correlating the ability of nine white-rot and brown-rot fungi to alter macromolecular material in alkali-solubilised brown coal with the spectrum of oxidases they produce when grown on low-nitrogen medium. The coal fraction used was that soluble at 3.0 ≤ pH ≤ 6.0 (SWC6 coal). In 15-ml cultures, Gloeophyllum trabeum, Lentinus lepideus and Trametes versicolor produced little or no lignin peroxidase, manganese (Mn) peroxidase or laccase activity and caused no change to SWC6 coal. Ganoderma applanatum and Pycnoporus cinnabarinus also produced no detectable lignin or Mn peroxidases or laccase, yet increased the absorbance at 400 nm of SWC6 coal. G. applanatum, which produced veratryl alcohol oxidase, also increased the modal apparent molecular mass. SWC6 coal exposed to Merulius tremellosus and Perenniporia tephropora, which secreted Mn peroxidases and laccase, and Phanerochaete chrysosporium, which produced Mn and lignin peroxidases, was polymerised but had unchanged or decreased absorbance. In the case of both P. chrysosporium and M. tremellosus, polymerisation of SWC6 coal was most extensive, leading to the formation of a complex insoluble in 100 mM NaOH. Rigidoporus ulmarius, which produced only laccase, both polymerised and reduced the A₄₀₀ of SWC6 coal. P. chrysosporium, M. tremellosus and P. tephropora grown in 10-ml cultures produced a spectrum of oxidases similar to that in 15-ml cultures but, in each case, caused more extensive loss of A₄₀₀, and P. chrysosporium depolymerised SWC6 coal. It is concluded that the extracellular oxidases of white-rot fungi can transform low-rank coal macromolecules and that increased oxygen availability in the shallower 10-ml cultures favours catabolism over polymerisation. (orig.)
Suyama, Takayuki
2016-01-01
This paper proposes a novel fixed low-rank spatial filter estimation for brain computer interface (BCI) systems, with an application that recognizes emotions elicited by movies. The proposed approach unifies such tasks as feature extraction, feature selection, and classification, which are often independently tackled in a “bottom-up” manner, under a regularized loss minimization problem. The loss function is explicitly derived from the conventional BCI approach, and its minimization is solved by optimization with a nonconvex fixed low-rank constraint. For evaluation, an experiment was conducted in which movies induced emotions in dozens of young adult subjects, and the emotional states were estimated using the proposed method. The advantage of the proposed method is that it combines feature selection, feature extraction, and classification into a monolithic optimization problem with a fixed low-rank regularization, which implicitly estimates optimal spatial filters. The proposed method shows competitive performance against the best CSP-based alternatives. PMID:27597862
Directory of Open Access Journals (Sweden)
Ken Yano
2016-01-01
Full Text Available This paper proposes a novel fixed low-rank spatial filter estimation for brain computer interface (BCI) systems, with an application that recognizes emotions elicited by movies. The proposed approach unifies such tasks as feature extraction, feature selection, and classification, which are often independently tackled in a “bottom-up” manner, under a regularized loss minimization problem. The loss function is explicitly derived from the conventional BCI approach, and its minimization is solved by optimization with a nonconvex fixed low-rank constraint. For evaluation, an experiment was conducted in which movies induced emotions in dozens of young adult subjects, and the emotional states were estimated using the proposed method. The advantage of the proposed method is that it combines feature selection, feature extraction, and classification into a monolithic optimization problem with a fixed low-rank regularization, which implicitly estimates optimal spatial filters. The proposed method shows competitive performance against the best CSP-based alternatives.
Preliminary Design of a Substitute Natural Gas (SNG) Plant from Low Rank Coal
Directory of Open Access Journals (Sweden)
Asti Permatasari
2014-09-01
...low and medium rank coal reserves are very large, at 2,426.00 million tons and 186.00 million tons respectively. Therefore, this SNG-from-low-rank-coal plant will be built in Ilir Timur District, South Sumatra. The plant is planned to be established in 2016 and to be ready for operation in 2018. Natural gas consumption in 2018 is estimated at 906,599.3 MMSCF, so the new plant is expected to substitute 4% of natural gas demand in Indonesia, namely 36,295.502 MMSCF per year, or 109.986 MMSCFD. The process of producing SNG from low rank coal consists of four main steps: coal preparation, gasification, gas cleaning, and methanation. Economic analysis gives an investment of 823,947,924 USD, an IRR of 13.12%, a POT (payout time) of 5 years, and a BEP of 68.55%.
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior
2004-07-30
This is the sixth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Argillon GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, a review of the available data on mercury oxidation across SCR catalysts from small, laboratory-scale experiments, pilot-scale slipstream reactors and full-scale power plants was carried out. Data from small-scale reactors obtained with both simulated flue gas and actual coal combustion flue gas demonstrated the importance of temperature, ammonia, space velocity and chlorine on mercury oxidation across SCR catalyst. SCR catalysts are, under certain circumstances, capable of driving mercury speciation toward the gas-phase equilibrium values at SCR temperatures. Evidence suggests that mercury does not always reach equilibrium at the outlet. There may be other factors that become apparent as more data become available.
Energy Technology Data Exchange (ETDEWEB)
Gauntt, R. O.; DeOtte, R. E.; Slowey, J. F.; McFarland, A. R.
1984-01-01
In parallel with pursuing the goal of increased utilization of low-rank solid fuels, the US Department of Energy is investigating various aspects associated with the disposal of coal-combustion solid wastes. Concern has been expressed relative to the potential hazards presented by leachates from fly ash, bottom ash and scrubber wastes. This is of particular interest in some regions where disposal areas overlap aquifer recharge regions. The western regions of the United States are characterized by relatively dry alkaline soils which may effect substantial attenuation of contaminants in the leachates thereby reducing the pollution potential. A project has been initiated to study the contaminant uptake of western soils. This effort consists of two phases: (1) preparation of a state-of-the-art document on soil attenuation; and (2) laboratory experimental studies to characterize attenuation of a western soil. The state-of-the-art document, represented herein, presents the results of studies on the characteristics of selected wastes, reviews the suggested models which account for the uptake, discusses the specialized columnar laboratory studies on the interaction of leachates and soils, and gives an overview of characteristics of Texas and Wyoming soils. 116 references, 10 figures, 29 tables.
Bio-liquefaction/solubilization of low-rank Turkish lignites and characterization of the products
Energy Technology Data Exchange (ETDEWEB)
Yesim Basaran; Adil Denizli; Billur Sakintuna; Alpay Taralp; Yuda Yurum [Hacettepe University, Ankara (Turkey). Department of Environmental Sciences
2003-08-01
The effect of some white-rot fungi on the bio-liquefaction/solubilization of two low-rank Turkish coals, the chemical composition of the liquid products, and the microbial mechanisms of coal conversion were investigated. Turkish Elbistan and Beypazari lignites were used in this study. The white-rot fungi used in the bio-liquefaction/solubilization of the lignites, received from various laboratories, were Pleurotus sajor-caju, Pleurotus sapidus, Pleurotus florida, Pleurotus ostreatus, Phanerochaete chrysosporium, and Coriolus versicolor. FT-IR spectra of raw and treated coal samples were measured, and bio-liquefied/solubilized coal samples were investigated by FT-IR and LC-MS techniques. The Coriolus versicolor fungus was determined to be most effective in bio-liquefying/solubilizing nitric acid-treated Elbistan lignite. In contrast, raw and nitric acid-treated Beypazari lignite seemed to be unaffected by the action of any kind of white-rot fungus. The liquid chromatogram of the water-soluble bio-liquefied/solubilized product contained four major peaks. The corresponding mass spectra of each peak indicated the presence of very complicated structures. 17 refs., 9 figs., 2 tabs.
Energy Technology Data Exchange (ETDEWEB)
Jain, M.K.; Narayan, R.
1993-08-05
Coal solubilization under aerobic conditions results in an oxygenated coal product which, in turn, makes the coal a poorer fuel than the starting material. The novel approach taken in this project is to remove oxygen from coal by reductive decarboxylation. In Wyodak subbituminous coal the major oxygen functionality is carboxylic groups, which exist predominantly as carboxylate anions strongly chelating metal cations like Ca²⁺ and forming strong macromolecular crosslinks that contribute in large measure to the network polymer structure. Removal of the carboxylic groups at ambient temperature by anaerobic organisms would unravel the macromolecular network, resulting in smaller coal macromolecules with an increased H/C ratio, better fuel value and better processing prospects. The studies described here sought to find biological methods to remove carboxylic functionalities from low rank coals under ambient conditions and to assess the properties of these modified coals for coal liquefaction. Efforts were made to establish anaerobic microbial consortia having decarboxylating ability, decarboxylate coal with the adapted microbial consortia, isolate the organisms, and characterize the biotreated coal products. Production of CO₂ was used as the primary indicator for possible coal decarboxylation.
Accelerated cardiac cine MRI using locally low rank and finite difference constraints.
Miao, Xin; Lingala, Sajan Goud; Guo, Yi; Jao, Terrence; Usman, Muhammad; Prieto, Claudia; Nayak, Krishna S
2016-07-01
To evaluate the potential value of combining multiple constraints for highly accelerated cardiac cine MRI, a locally low rank (LLR) constraint and a temporal finite difference (FD) constraint were combined to reconstruct cardiac cine data from highly undersampled measurements. Retrospectively undersampled 2D Cartesian reconstructions were quantitatively evaluated against fully sampled data using normalized root mean square error, structural similarity index (SSIM) and high frequency error norm (HFEN). This method was also applied to 2D golden-angle radial real-time imaging to facilitate single breath-hold whole-heart cine (12 short-axis slices, 9-13 s single breath hold). Reconstruction was compared against state-of-the-art constrained reconstruction methods: LLR, FD, and k-t SLR. At 10 to 60 spokes/frame, LLR+FD better preserved fine structures and depicted myocardial motion with reduced spatio-temporal blurring in comparison to existing methods. LLR yielded a higher SSIM ranking than FD; FD had a higher HFEN ranking than LLR. LLR+FD combined the complementary advantages of the two, and ranked highest in all metrics for all retrospectively undersampled cases. Single breath-hold multi-slice cardiac cine with prospective undersampling was enabled with in-plane spatio-temporal resolutions of 2×2 mm² and 40 ms. Highly accelerated cardiac cine is enabled by the combination of 2D undersampling and the synergistic use of LLR and FD constraints. Copyright © 2016 Elsevier Inc. All rights reserved.
Transformation of low rank coal by Phanerochaete chrysosporium and other wood-rot fungi
Energy Technology Data Exchange (ETDEWEB)
Ralph, J.P.; Catcheside, D.E.A. [Flinders University of South Australia, Bedford Park, SA (Australia). School of Biological Sciences
1997-11-01
There is evidence that the organic fraction of low rank coal (LRC) is chemically transformed by wood-rot fungi. These fungi and the oxidases they secrete have variously been shown to solubilise, polymerise, depolymerise and decolourise macromolecules derived from LRC. The white-rot fungus Phanerochaete chrysosporium is able to depolymerise and decolourise alkali-soluble acid-precipitable LRC macromolecules (AS-coal), converting them to a form not recoverable by alkali washing. Transformation of AS-coal is enhanced in N-limiting media under hyperbaric oxygen and is believed to be due, at least in part, to oxidation by manganese peroxidase (MnP) and lignin peroxidase (LiP). The precise role these enzymes play is not yet clear, but enzyme and mutant studies show AS-coal can be both polymerised and depolymerised by MnP and its mimetic Mn(III) without change to its absorbance at 400 nm. LiP decolourises AS-coal without apparently altering its molecular mass. Culture filtrates containing MnP and LiP acting on methylated AS-coal yield an array of low molecular mass moieties. Coal-derived monomers have not been recovered from cultures of P. chrysosporium, consistent with them being taken up by the fungal cells. This suggests that cellular transformations may permit the diverse catabolic products derived from LRC to be converted to specific low molecular mass compounds in usable yield. 43 refs., 2 figs.
Indonesian low rank coal oxidation: The effect of H2O2 concentration and oxidation temperature
Rahayu, S. S.; Findiati, F.; Aprilia, F.
2016-11-01
Extraction of Indonesian low rank coals with alkaline solution has been performed to isolate the humic substances. Pretreatment of the coals by oxidation with H2O2 prior to extraction is required to obtain a higher yield of humic substances. Previous research considered only the extraction process; therefore, the effects of reaction temperature and residence time on coal oxidation and on the composition of the extract residues are also investigated here. The oxidation temperatures studied were 40°C, 50°C, and 70°C, and the H2O2 concentrations studied were 5%, 15%, 20%, and 30%. All the oxidation variables were studied for 90 minutes. The results show that the higher the concentration of H2O2 used, the less oxidized coal was produced; the same trend was obtained at higher oxidation temperature. H2O2 concentration, oxidation temperature, and reaction time all have positive effects on the yield of the humic substance extraction.
Nonlocal image restoration with bilateral variance estimation: a low-rank approach.
Dong, Weisheng; Shi, Guangming; Li, Xin
2013-02-01
Simultaneous sparse coding (SSC) or nonlocal image representation has shown great potential in various low-level vision tasks, leading to several state-of-the-art image restoration techniques, including BM3D and LSSC. However, it still lacks a physically plausible explanation of why SSC is a better model than conventional sparse coding for the class of natural images. Meanwhile, the problem of sparsity optimization, especially when tangled with dictionary learning, is computationally difficult to solve. In this paper, we take a low-rank approach toward SSC and provide a conceptually simple interpretation from a bilateral variance estimation perspective, namely that singular-value decomposition of similar packed patches can be viewed as pooling both local and nonlocal information for estimating signal variances. This perspective inspires us to develop a new class of image restoration algorithms called spatially adaptive iterative singular-value thresholding (SAIST). For noisy data, SAIST generalizes the celebrated BayesShrink from local to nonlocal models; for incomplete data, SAIST extends the previous deterministic annealing-based solution to sparsity optimization by incorporating the idea of dictionary learning. In addition to conceptual simplicity and computational efficiency, SAIST achieves highly competitive (often better) objective performance compared to several state-of-the-art methods in image denoising and completion experiments. Our subjective quality results compare favorably with those obtained by existing techniques, especially at high noise levels and with a large amount of missing data.
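The shrinkage operation at the heart of SAIST-style methods is singular-value thresholding of a matrix of grouped similar patches. A minimal sketch of that generic step (illustrative only; SAIST's patch grouping, adaptive threshold selection, and iteration schedule are not reproduced here, and the function and variable names are ours):

```python
import numpy as np

def singular_value_threshold(patch_matrix, tau):
    """Soft-threshold the singular values of a patch matrix.

    patch_matrix: 2D array whose columns are vectorized similar patches.
    tau: threshold; larger values enforce a lower-rank (smoother) estimate.
    """
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft shrinkage of singular values
    return u @ np.diag(s_shrunk) @ vt

# A noisy rank-1 "patch group": shrinkage suppresses noise-dominated modes.
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(16))
noisy = clean + 0.1 * rng.standard_normal((64, 16))
denoised = singular_value_threshold(noisy, tau=1.0)
assert np.linalg.matrix_rank(denoised) <= np.linalg.matrix_rank(noisy)
```

Because small singular values carry mostly noise for a group of similar patches, zeroing them denoises all patches in the group at once, which is the "pooling local and nonlocal information" described above.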
Motion adaptive patch-based low-rank approach for compressed sensing cardiac cine MRI.
Yoon, Huisu; Kim, Kyung Sang; Kim, Daniel; Bresler, Yoram; Ye, Jong Chul
2014-11-01
One of the technical challenges in cine magnetic resonance imaging (MRI) is to reduce the acquisition time to enable high spatio-temporal resolution imaging of a cardiac volume within a short scan time. Recently, compressed sensing approaches have been investigated extensively for highly accelerated cine MRI by exploiting transform domain sparsity using linear transforms such as wavelets and Fourier transforms. However, in cardiac cine imaging, the cardiac volume changes significantly between frames, and there often exist abrupt pixel value changes along time. In order to effectively sparsify such temporal variations, it is necessary to exploit temporal redundancy along motion trajectories. This paper introduces a novel patch-based reconstruction method to exploit geometric similarities in the spatio-temporal domain. In particular, we use a low rank constraint for similar patches along motion trajectories, based on the observation that rank structures are relatively insensitive to global intensity changes but readily capture moving edges. A Nash equilibrium formulation with relaxation is employed to guarantee convergence. Experimental results show that the proposed algorithm clearly reconstructs important anatomical structures in cardiac cine images and provides improved image quality compared to existing state-of-the-art methods such as k-t FOCUSS, k-t SLR, and MASTeR.
PCLR: phase-constrained low-rank model for compressive diffusion-weighted MRI.
Gao, Hao; Li, Longchuan; Zhang, Kai; Zhou, Weifeng; Hu, Xiaoping
2014-11-01
This work develops a compressive sensing approach for diffusion-weighted (DW) MRI. A phase-constrained low-rank (PCLR) approach was developed using the image coherence across the DW directions for efficient compressive DW MRI, while accounting for drastic phase changes across the DW directions, possibly as a result of eddy current, and rigid and nonrigid motions. In PCLR, a low-resolution phase estimation was used for removing phase inconsistency between DW directions. In our implementation, GRAPPA (generalized autocalibrating partial parallel acquisition) was incorporated for better phase estimation while allowing higher undersampling factor. An efficient and easy-to-implement image reconstruction algorithm, consisting mainly of partial Fourier update and singular value decomposition, was developed for solving PCLR. The error measures based on diffusion-tensor-derived metrics and tractography indicated that PCLR, with its joint reconstruction of all DW images using the image coherence, outperformed the frame-independent reconstruction through zero-padding FFT. Furthermore, using GRAPPA for phase estimation, PCLR readily achieved a four-fold undersampling. The PCLR is developed and demonstrated for compressive DW MRI. A four-fold reduction in k-space sampling could be readily achieved without substantial degradation of reconstructed images and diffusion tensor measures, making it possible to significantly reduce the data acquisition in DW MRI and/or improve spatial and angular resolutions. Copyright © 2013 Wiley Periodicals, Inc.
Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal
Energy Technology Data Exchange (ETDEWEB)
Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri, John; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Lopez-Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh
2012-03-30
The purpose of this project was to evaluate the ability of advanced low rank coal gasification technology to significantly reduce the cost of electricity (COE) for IGCC power plants with 90% carbon capture and sequestration, compared with the COE for similarly configured IGCC plants using conventional low rank coal gasification technology. GE’s advanced low rank coal gasification technology uses the Posimetric Feed System, a new dry coal feed system based on GE’s proprietary Posimetric Feeder. In order to demonstrate the performance and economic benefits of the Posimetric Feeder in lowering the cost of low rank coal-fired IGCC power with carbon capture, two case studies were completed. In the Base Case, the gasifier was fed a dilute slurry of Montana Rosebud PRB coal using GE’s conventional slurry feed system. In the Advanced Technology Case, the slurry feed system was replaced with the Posimetric Feed System. The process configurations of both cases were kept the same, to the extent possible, in order to highlight the benefit of substituting the Posimetric Feed System for the slurry feed system.
Advanced CO{sub 2} Capture Technology for Low Rank Coal IGCC System
Energy Technology Data Exchange (ETDEWEB)
Alptekin, Gokhan
2013-09-30
The overall objective of the project is to demonstrate the technical and economic viability of a new Integrated Gasification Combined Cycle (IGCC) power plant designed to efficiently process low rank coals. The plant uses an integrated CO{sub 2} scrubber/Water Gas Shift (WGS) catalyst to capture over 90 percent of the CO{sub 2} emissions, while providing a significantly lower cost of electricity (COE) than a similar plant with a conventional cold gas cleanup system based on Selexol™ technology and 90 percent carbon capture. TDA’s system uses a high temperature physical adsorbent capable of removing CO{sub 2} above the dew point of the synthesis gas and a commercial WGS catalyst that can effectively convert CO. For bituminous coal the net plant efficiency is about 2.4 percentage points higher than an Integrated Gasification Combined Cycle (IGCC) plant equipped with Selexol™ to capture CO{sub 2}. We also previously completed two successful field demonstrations: one at the National Carbon Capture Center (Wilsonville, AL) in 2011, and a second demonstration in fall of 2012 at the Wabash River IGCC plant (Terre Haute, IN). In this project, we first optimized the sorbent to catalyst ratio used in the combined WGS and CO{sub 2} capture
Energy Technology Data Exchange (ETDEWEB)
1989-12-31
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior; Temi Linjewile
2003-10-31
This is the third Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Argillon GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, the second set of mercury measurements was made after the catalysts had been exposed to flue gas for about 2,000 hours. There was good agreement between the Ontario Hydro measurements and the SCEM measurements. Carbon trap measurements of total mercury agreed fairly well with the SCEM. There did appear to be some loss of mercury in the sampling system toward the end of the sampling campaign. NO{sub x} reductions across the catalysts ranged from 60% to 88%. Loss of total mercury across the commercial catalysts was not observed, as it had been in the March/April test series. It is not clear whether this was due to aging of the catalyst or to changes in the sampling system made between March/April and August. In the presence of ammonia, the blank monolith showed no oxidation. Two of the commercial catalysts showed mercury oxidation that was comparable to that in the March/April series. The other three commercial catalysts showed a decrease in mercury oxidation relative to the March/April series. Oxidation of mercury increased without ammonia present. Transient experiments showed that when ammonia was turned on, mercury appeared to desorb from the catalyst, suggesting displacement of adsorbed mercury by the ammonia.
Co-pyrolysis of low rank coals and biomass: Product distributions
Energy Technology Data Exchange (ETDEWEB)
Soncini, Ryan M.; Means, Nicholas C.; Weiland, Nathan T.
2013-10-01
Pyrolysis and gasification of combined low rank coal and biomass feeds are the subject of much study in an effort to mitigate the production of greenhouse gases from integrated gasification combined cycle (IGCC) systems. While co-feeding has the potential to reduce the net carbon footprint of commercial gasification operations, the effects of co-feeding on kinetics and product distributions require study to ensure the success of this strategy. Southern yellow pine was pyrolyzed in a semi-batch drop tube reactor with either Powder River Basin sub-bituminous coal or Mississippi lignite at several temperatures and feed ratios. Product gas composition of expected primary constituents (CO, CO{sub 2}, CH{sub 4}, H{sub 2}, H{sub 2}O, and C{sub 2}H{sub 4}) was determined by in-situ mass spectrometry, while minor gaseous constituents were determined using a GC-MS. Product distributions are fit to linear functions of temperature, and quadratic functions of biomass fraction, for use in computational co-pyrolysis simulations. The results show significant nonlinearities, particularly at higher temperatures and for lower ranked coals. The co-pyrolysis product distributions evolve more tar, and less char, CH{sub 4}, and C{sub 2}H{sub 4}, than an additive pyrolysis process would suggest. For lignite co-pyrolysis, CO and H{sub 2} production are also reduced. The data suggest that evolution of hydrogen from rapid pyrolysis of biomass prevents the crosslinking of fragmented aromatic structures during coal pyrolysis, producing tar rather than secondary char and light gases. Finally, it is shown that, for the two coal types tested, co-pyrolysis synergies are more significant as coal rank decreases, likely because the initial structure in these coals contains larger pores and smaller clusters of aromatic structures, which are more readily retained as tar in rapid co-pyrolysis.
Upgrading low-rank coals using the liquids from coal (LFC) process
Energy Technology Data Exchange (ETDEWEB)
Nickell, R.E.; Hoften, S.A. van
1993-12-31
Three unmistakable trends characterize national and international coal markets today and help to explain coal's continuing and, in some cases, increasing share of the world's energy mix. First, the downward trend in coal prices is primarily influenced by an excess of increasing supply relative to increasing demand; associated with this trend are the availability of capital to expand coal supplies when prices become firm and the role of coal exports in international trade, especially for developing nations. Second, the global trend toward reducing the transportation cost component relative to the market preserves or enhances the producer's profit margins in the face of lower prices; the strong influence of transportation costs is due to the geographic relationships between coal producers and coal users. Third, the trend toward upgrading low grade coals, including subbituminous and lignite coals that have favorable environmental characteristics such as low sulfur, compensates in some measure for decreasing coal prices and helps to reduce transportation costs. The upgrading of low grade coal includes a variety of precombustion clean coal technologies, such as deep coal cleaning. Also included in this grouping are the coal drying and mild pyrolysis (or mild gasification) technologies that remove most of the moisture and a substantial portion of the volatile matter, including organic sulfur, while producing two or more saleable coproducts with considerable added value. SGI International's Liquids From Coal (LFC) process falls into this category. In the following sections, the LFC process is described and the coproducts of the mild pyrolysis are characterized. Since the process can be applied widely to low rank coals around the world, the characteristics of coproducts from three different regions around the Pacific Rim (the Powder River Basin of Wyoming, the Beluga Field in Alaska near Cook Inlet, and the Bukit Asam region in south Sumatra, Indonesia) are compared.
Reactions between sodium and silicon minerals during gasification of low-rank coal
Energy Technology Data Exchange (ETDEWEB)
D.P. Ross; A. Kosminski; J.B. Agnew [University of Adelaide, Adelaide, SA (Australia). Cooperative Research Centre for Clean Power from Lignite, School of Chemical Engineering
2003-07-01
The main objective of this study was to elucidate the role of sodium and silicon minerals in the formation of liquid phases during gasification of a high-sulphur low-rank Australian coal. The organically-bound sodium was found to be transformed into sodium carbonate, contrary to thermodynamic predictions of the formation of sodium sulphide. Up to half of the sodium was vaporised from the char. Volatilisation of sodium increased with temperature and time, and depended on the gas environment. Sodium chloride present in coal either vaporised or partly reacted with the coal to form sodium carbonate. The release of sodium was disproportionate to that of chlorine. Steam was found, both theoretically and experimentally, to be the most important component of the gasification environment. Steam substantially reduced the melting temperature of sodium carbonate. Consequently, gasification with steam resulted in the formation, via a liquid-solid state reaction, of liquid silicates at temperatures as low as 750{degree}C. Sodium chloride and silica reacted only in steam, forming fused silicates at 750{degree}C, with the rate of silicate formation substantially slower than for the reaction between silica and sodium carbonate. Formation of silicates around silica particles and of fused silicate joints between individual silica grains inside the char was established to occur uniformly throughout the char particles. Experimental results showed that kaolin and organically bound sodium react upon reaching 650{degree}C to form a solid sodium aluminosilicate with a melting point above 1250{degree}C. A similar reaction occurs with sodium chloride, but at a slower rate dependent on the temperature, time and gas atmosphere. Importantly, the reactions of sodium with kaolin prevented reactions with silica that would form liquid silicates.
A comparison between alkaline and decomplexing reagents to extract humic acids from low rank coals
Energy Technology Data Exchange (ETDEWEB)
Garcia, D.; Cegarra, J.; Abad, M. [CSIC, Madrid (Spain). Centro de Edafologia y Biologia Aplicada del Segura
1996-07-01
Humic acids (HAs) were obtained from two low rank coals (lignite and leonardite) by using either alkali extractants (0.1 M NaOH, 0.1 M KOH or 0.25 M KOH) or solutions containing Na{sub 4}P{sub 2}O{sub 7} (0.1 M Na{sub 4}P{sub 2}O{sub 7} or 0.1 M NaOH/Na{sub 4}P{sub 2}O{sub 7}). In both coals, the greatest yields were obtained with 0.25 M KOH and the lowest with the 0.1 M alkalis, whereas the extractions based on Na{sub 4}P{sub 2}O{sub 7} yielded intermediate values and were more effective on the lignite. Chemical analysis showed that the leonardite HAs consisted of molecules that were less oxidized and had fewer functional groups than the HAs released from the lignite. Moreover, the HAs extracted by reagents containing Na{sub 4}P{sub 2}O{sub 7} exhibited more functional groups than those extracted with alkali, this effect being more apparent in the lignite because of its greater cation exchange capacity. Gel permeation chromatography indicated that the leonardite HAs contained a greater proportion of higher molecular size compounds than the lignite HAs, and that both solutions containing Na{sub 4}P{sub 2}O{sub 7} released HAs with a greater proportion of smaller molecular compounds from the lignite than did the alkali extractants. 16 refs., 3 figs., 2 tabs.
Physico-chemical phenomena during mechanical thermal expression of water in low rank coal
Energy Technology Data Exchange (ETDEWEB)
Alan L. Chaffee; Yuli Aranto; Christian Bergins; Janine Hulston; Marc Marshall; Haruo Kumagai [Monash University, Vic. (Australia). School of Chemistry
2007-07-01
Mechanical thermal expression (MTE) is a non-evaporative method for water removal from low rank coal, with typical processing conditions in the range 150-220{sup o}C and 10-20 MPa of applied mechanical pressure. Using a range of analytical methods, this study probes physico-chemical changes in the coal structure that occur as a result of MTE processing, as well as molecular dynamic behaviour under MTE conditions. Mercury intrusion porosimetry (MIP), after appropriately compensating for the coal's compressibility, showed that progressively harsher MTE conditions led to a reduction in the concentration of macropores and a concomitant increase in the concentration of mesopores. However, since MIP requires the use of dried samples, it does not facilitate the examination of 'as-received' samples. Using small angle X-ray scattering (SAXS), it was possible to examine the MTE products in both their wet and dry states, enabling the pore volume reduction upon drying to be observed. Considering the SAXS and MIP results in combination suggests that the abundance of 'closed' (meso)pores is reduced at higher MTE processing temperatures. The dynamic nature of the coal molecular structure under MTE processing conditions has been probed for the first time using {sup 1}H-NMR transverse relaxation rate (T2) measurements. The data suggest that water exerts a 'plasticising' effect, enhancing the mobility of the coal structure at elevated temperature. This enhanced mobility (softening) presumably facilitates the reorganization of molecular structure, enabling the changes in porosity identified by MIP and SAXS. 22 refs., 8 figs., 1 tab.
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior
2004-12-31
The objectives of this program were to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel and to develop a greater understanding of mercury oxidation across SCR catalysts in the form of a simple model. The Electric Power Research Institute (EPRI) and Argillon GmbH provided co-funding for this program. REI used a multicatalyst slipstream reactor to determine oxidation of mercury across five commercial SCR catalysts at a power plant that burned a blend of 87% subbituminous coal and 13% bituminous coal. The chlorine content of the blend was 100 to 240 {micro}g/g on a dry basis. Mercury measurements were carried out when the catalysts were relatively new, corresponding to about 300 hours of operation and again after 2,200 hours of operation. NO{sub x}, O{sub 2} and gaseous mercury speciation at the inlet and at the outlet of each catalyst chamber were measured. In general, the catalysts all appeared capable of achieving about 90% NO{sub x} reduction at a space velocity of 3,000 hr{sup -1} when new, which is typical of full-scale installations; after 2,200 hours exposure to flue gas, some of the catalysts appeared to lose NO{sub x} activity. For the fresh commercial catalysts, oxidation of mercury was in the range of 25% to 65% at typical full-scale space velocities. A blank monolith showed no oxidation of mercury under any conditions. All catalysts showed higher mercury oxidation without ammonia, consistent with full-scale measurements. After exposure to flue gas for 2,200 hours, some of the catalysts showed reduced levels of mercury oxidation relative to the initial levels of oxidation. A model of Hg oxidation across SCRs was formulated based on full-scale data. The model took into account the effects of temperature, space velocity, catalyst type and HCl concentration in the flue gas.
Object tracking via online low rank representation
Institute of Scientific and Technical Information of China (English)
王海军; 葛红娟; 张圣燕
2016-01-01
Object tracking is an active research topic in computer vision. Traditional tracking methods based on a generative model are sensitive to noise and occlusion, which can cause tracking failure. To address this problem, the tracking results of the first few frames are used as the observation matrix, and the low rank features of the observation model are computed with a robust principal component analysis (RPCA) model. When new video frames arrive, rather than reusing all past tracking results as the observation matrix, a new incremental RPCA model is proposed to compute the low rank features of the new matrix via the augmented Lagrangian algorithm. The tracking model is established in a Bayesian framework, and the dictionary matrix is updated with the recovered low rank features. The proposed algorithm and six state-of-the-art approaches were tested on eight publicly available sequences. Experimental results show that the proposed method achieves a lower pixel center position error and a higher overlap ratio.
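The RPCA decomposition this tracker builds on can be sketched with a generic inexact augmented Lagrangian (ALM) solver (a sketch only, not the paper's incremental variant; the function name and parameter defaults, including the common 1/sqrt(max(m, n)) heuristic for the sparsity weight, are our assumptions):

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Robust PCA: split D into L (low rank) + S (sparse) by inexact ALM."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu = 1.25 / np.linalg.norm(D, 2)   # penalty parameter on the constraint
    mu_cap = mu * 1e7
    Y = np.zeros_like(D)               # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # L-update: singular value thresholding of (D - S + Y/mu)
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft thresholding of (D - L + Y/mu)
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = D - L - S                  # constraint residual
        Y += mu * Z
        mu = min(1.5 * mu, mu_cap)
        if np.linalg.norm(Z) / norm_D < tol:
            break
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this separates the two components accurately; in the tracker above, the recovered low-rank part is what updates the appearance dictionary, while the sparse part absorbs occlusion and noise.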
Ren, Xiang
2012-01-01
Transform Invariant Low-rank Textures (TILT) is a novel and powerful tool that can effectively rectify a rich class of low-rank textures in 3D scenes from 2D images despite significant deformation and corruption. The existing algorithm for solving TILT is based on the alternating direction method (ADM). It suffers from high computational cost and is not theoretically guaranteed to converge to a correct solution. In this paper, we propose a novel algorithm to speed up solving TILT, with guaranteed convergence. Our method is based on the recently proposed linearized alternating direction method with adaptive penalty (LADMAP). To further reduce computation, warm starts are also introduced to initialize the variables better and cut the cost of singular value decomposition. Extensive experimental results on both synthetic and real data demonstrate that the new algorithm works much more efficiently and robustly than the existing algorithm. It can be at least five times faster than the previous method.
Leclerc, Arnaud; Carrington, Tucker
2016-01-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analyzed and the numerical results are compared with those obtained with the reduced rank block power method introduced in J. Chem. Phys. 140, 174111 (2014). Relative merits of the different algorithms are presented, showing that the advantage o...
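The rank-restriction step this abstract refers to can be illustrated, for the simplest two-factor case, by SVD recompression of a sum-of-products representation (a sketch under our own naming; the actual reduced-rank eigensolvers recompress high-dimensional SOP tensors factor by factor without ever forming the full array):

```python
import numpy as np

def reduce_sop_rank(A, B, r):
    """Compress a two-factor sum-of-products F = sum_k outer(A[:, k], B[:, k])
    to its best rank-r approximation via SVD recompression.

    Assembling F explicitly is only feasible in two dimensions; it serves
    here to show what "restricting the rank of an SOP basis function" means.
    """
    F = A @ B.T
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    A_r = U[:, :r] * s[:r]   # fold the singular values into the left factors
    B_r = Vt[:r].T
    return A_r, B_r

# A redundant rank-4 SOP representation of a function whose true rank is 2:
rng = np.random.default_rng(0)
a = rng.standard_normal((10, 2))
A = np.hstack([a, a])            # duplicated factors -> redundant terms
B = rng.standard_normal((12, 4))
A2, B2 = reduce_sop_rank(A, B, r=2)
assert np.allclose(A2 @ B2.T, A @ B.T)   # recompression is exact here
```

Each iteration of a reduced-rank eigensolver applies the operator (which inflates the number of SOP terms) and then recompresses in this spirit, keeping memory bounded.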
Directory of Open Access Journals (Sweden)
Chuan-yun LI
2011-12-01
Objective: The present study investigates the influence of professional stress and social support on professional burnout among low-rank army officers. Methods: Professional stress, social support, and professional burnout scales for low-rank army officers were used as test tools, and officers of established units (battalion, company, and platoon) were chosen as test subjects. Of the 260 scales sent, 226 effective scales were received. Descriptive statistics and canonical correlation analysis models were used to analyze the influence of each variable. Results: The scores of low-rank army officers on the professional stress, social support, and professional burnout scales were above average, except on two factors, namely interpersonal support and de-individualization. The canonical analysis identified three groups of canonical correlation factors, of which two reached a significant level (P < 0.001). After further eliminating the social support variable, the canonical correlation analysis of professional stress and burnout showed that the canonical correlation coefficients ρ₁ and ρ₂ were 0.62 and 0.36, respectively, both at a very significant level (P < 0.001). Conclusion: Low-rank army officers experience higher professional stress and burnout levels, showing a lower sense of accomplishment, emotional exhaustion, and more serious depersonalization. However, social support can reduce the onset and seriousness of professional burnout among these officers by lessening pressure factors such as career development, work features, salary conditions, and other personal factors.
Lin, Lin; Huhs, Georg; Yang, Chao
2014-01-01
We describe a scheme for efficient large-scale electronic-structure calculations based on the combination of the pole expansion and selected inversion (PEXSI) technique with the SIESTA method, which uses numerical atomic orbitals within the Kohn-Sham density functional theory (KSDFT) framework. The PEXSI technique can efficiently utilize the sparsity pattern of the Hamiltonian and overlap matrices generated in SIESTA, and for large systems has a much lower computational complexity than that associated with the matrix diagonalization procedure. The PEXSI technique can be used to evaluate the electron density, free energy, atomic forces, density of states and local density of states without computing any eigenvalue or eigenvector of the Kohn-Sham Hamiltonian. It can achieve accuracy fully comparable to that obtained from a matrix diagonalization procedure for general systems, including metallic systems at low temperature. The PEXSI method is also highly scalable. With the recently developed massively parallel P...
Ambikasaran, Sivaram
2015-01-01
Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities for low density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criterion, which scales as $\mathcal{O}(N^3)$ when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocals of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of the matrix $V$ is computed in a fast manner at a computational complexity of $\...
Liquid CO{sub 2}/Coal Slurry for Feeding Low Rank Coal to Gasifiers
Energy Technology Data Exchange (ETDEWEB)
Marasigan, Jose; Goldstein, Harvey; Dooher, John
2013-09-30
This study investigates the practicality of using a liquid CO₂/coal slurry preparation and feed system for the E-Gas™ gasifier in an integrated gasification combined cycle (IGCC) electric power generation plant configuration. Liquid CO₂ has several property differences from water that make it attractive for the coal slurries used in coal gasification-based power plants. First, the viscosity of liquid CO₂ is much lower than that of water, so less energy should be needed to pump it through a pipe, and a higher solids concentration can be fed to the gasifier, which should decrease the heat required to vaporize the slurry. Second, the heat of vaporization of liquid CO₂ is about 80% lower than that of water, so less heat from the gasification reactions is needed to vaporize the slurry, which should reduce the oxygen needed to reach a given gasifier temperature. Third, the surface tension of liquid CO₂ is about two orders of magnitude lower than that of water, which should result in finer atomization of the slurry, faster reaction between the oxygen and coal particles, and better carbon conversion at the same gasifier temperature. EPRI and others have recognized the potential of liquid CO₂ to improve IGCC plant performance and have previously conducted systems-level analyses of this concept, showing that a significant increase in IGCC performance can be achieved with liquid CO₂ over water with certain gasifiers. Those earlier analyses, however, relied on assumed liquid CO₂/coal slurry properties. This low-rank coal study extends the existing knowledge base by evaluating the liquid CO₂/coal slurry concept on an E-Gas™-based IGCC plant with full 90% CO₂ capture. The overall objective is to determine if this
Geogenic organic contaminants in the low-rank coal-bearing Carrizo-Wilcox aquifer of East Texas, USA
Chakraborty, Jayeeta; Varonka, Matthew; Orem, William; Finkelman, Robert B.; Manton, William
2017-06-01
The organic composition of groundwater along the Carrizo-Wilcox aquifer in East Texas (USA), sampled from rural wells in May and September 2015, was examined as part of a larger study of the potential health and environmental effects of organic compounds derived from low-rank coals. The quality of water from the low-rank coal-bearing Carrizo-Wilcox aquifer is a potential environmental concern and no detailed studies of the organic compounds in this aquifer have been published. Organic compounds identified in the water samples included: aliphatics and their fatty acid derivatives, phenols, biphenyls, N-, O-, and S-containing heterocyclic compounds, polycyclic aromatic hydrocarbons (PAHs), aromatic amines, and phthalates. Many of the identified organic compounds (aliphatics, phenols, heterocyclic compounds, PAHs) are geogenic and originated from groundwater leaching of young and unmetamorphosed low-rank coals. Estimated concentrations of individual compounds ranged from about 3.9 to 0.01 μg/L. In many rural areas in East Texas, coal strata provide aquifers for drinking water wells. Organic compounds observed in groundwater are likely to be present in drinking water supplied from wells that penetrate the coal. Some of the organic compounds identified in the water samples are potentially toxic to humans, but at the estimated levels in these samples, the compounds are unlikely to cause acute health problems. The human health effects of low-level chronic exposure to coal-derived organic compounds in drinking water in East Texas are currently unknown, and continuing studies will evaluate possible toxicity.
Reweighted Low-Rank Tensor Decomposition based on t-SVD and its Applications in Video Denoising
Baburaj, M.; George, Sudhish N.
2016-01-01
The t-SVD based Tensor Robust Principal Component Analysis (TRPCA) decomposes a low-rank multi-linear signal corrupted by gross errors into low-multi-rank and sparse components by simultaneously minimizing the tensor nuclear norm and the l1 norm. But if the multi-rank of the signal is considerably large and/or a large amount of noise is present, the performance of TRPCA deteriorates. To overcome this problem, this paper proposes a new efficient iterative reweighted tensor decomposition scheme based on t-...
A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements
Chávez, Gustavo
2017-03-17
A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with O(N log² N) arithmetic complexity and O(N log N) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the $\mathcal{H}$-LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices such as $\mathcal{H}$-LU and that it can tackle problems where algebraic multigrid fails to converge.
Directory of Open Access Journals (Sweden)
Xiaoxia Yin
In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using the eigenvalues of the Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle-type noise is reduced by applying a connectivity constraint on the extracted curvature-based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images and measured vessel widths for the REVIEW images. Our approach outperforms the methods in the literature.
Zu, Baokai; Xia, Kewen; Pan, Yongke; Niu, Wenjia
2017-01-01
Semisupervised Discriminant Analysis (SDA) is a semisupervised dimensionality reduction algorithm, which can easily resolve the out-of-sample problem. Related works usually focus on the geometric relationships of data points, which are not obvious, to enhance the performance of SDA. Different from these works, the regularized graph construction is researched here, which is important in graph-based semisupervised learning methods. In this paper, we propose a novel graph for Semisupervised Discriminant Analysis, called the combined low-rank and k-nearest-neighbor (LRKNN) graph. In our LRKNN graph, we map the data to the LR feature space and then kNN is adopted to satisfy the algorithmic requirements of SDA. Since the low-rank representation can capture the global structure and the k-nearest-neighbor algorithm can maximally preserve the local geometrical structure of the data, the LRKNN graph can significantly improve the performance of SDA. Extensive experiments on several real-world databases show that the proposed LRKNN graph is an efficient graph constructor, which can largely outperform other commonly used baselines.
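The two-stage LRKNN construction can be caricatured in a few lines. In the sketch below, a truncated SVD projection stands in for the paper's low-rank representation step (which in the actual method is an optimization over representation coefficients), and the kNN graph is then built in the reduced space; all names are illustrative.

```python
import numpy as np

def lr_knn_graph(X, rank, k):
    """Toy LRKNN-style graph: project samples onto a rank-r subspace,
    then connect each sample to its k nearest neighbors in that space."""
    # Truncated SVD of the centered data gives the low-rank feature space.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:rank].T                 # low-rank features, one row per sample
    # Pairwise squared distances in the reduced space.
    sq = (Z**2).sum(axis=1)
    D = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    np.fill_diagonal(D, np.inf)          # exclude self-matches
    # Adjacency: k nearest neighbors per row, then symmetrized.
    idx = np.argsort(D, axis=1)[:, :k]
    A = np.zeros((len(X), len(X)))
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))
A = lr_knn_graph(X, rank=3, k=5)
print("graph:", A.shape, "edges:", int(A.sum() // 2))
```

Projecting first means neighborhoods are computed on the global (denoised) structure, while kNN preserves local geometry, mirroring the abstract's rationale for combining the two.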
Directory of Open Access Journals (Sweden)
Fajri Vidian
2017-03-01
Solid fuel must be converted to gaseous or liquid fuel for use in an internal combustion engine or gas turbine. Gasification is a technology to convert solid fuel into combustible gas. A gasification system generally consists of a gasifier, cyclone, spray tower and filter. This study covers the design, construction and experimental testing of such a system. The Imbert downdraft gasifier was designed with a maximum fuel consumption of 42 kg/h, a height of 90 cm, a main diameter of 26.8 cm and a throat diameter of 12 cm, and was constructed from SUS 304 stainless steel. Biomass and low rank coal from South Sumatera, Indonesia were used as fuel. The experiment showed that combustible gas was produced after 15 minutes of operation on average. The air-fuel ratio of low rank coal was 1.7, higher than that of biomass (1.1). Combustible gas production stopped when the fuel level dropped below the throat of the gasifier.
Lewis, Cannada A; Valeev, Edward F
2015-01-01
A Clustered Low Rank (CLR) framework for block-sparse and block-low-rank tensor representation and computation is described. The CLR framework depends on two parameters that control precision: one controlling the CLR block rank truncation and another that controls screening of small contributions in arithmetic operations on CLR tensors. As these parameters approach zero, CLR representation and arithmetic become exact. There are no other ad hoc heuristics, such as domains. Use of the CLR format for the order-2 and order-3 tensors that appear in the context of density fitting (DF) evaluation of the Hartree-Fock (exact) exchange significantly reduced the storage and computational complexities below their standard $\mathcal{O}(N^3)$ and $\mathcal{O}(N^4)$ figures. Even for relatively small systems and realistic basis sets, CLR-based DF HF becomes more efficient than the standard DF approach, and significantly more efficient than the conventional non-DF HF, while negligibly affecting molecular energies and properties.
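The block rank truncation parameter can be illustrated with a minimal sketch: each block is stored as truncated SVD factors whenever that is cheaper than the dense block, with the truncation threshold playing the role of CLR's precision knob. This is a toy model of a block-low-rank format, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def compress_block(B, tol):
    """Store a block as truncated SVD factors when that is cheaper than
    keeping it dense; `tol` is the relative singular-value cutoff."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
    m, n = B.shape
    if r * (m + n) < m * n:              # low-rank form is smaller: keep factors
        return ("lowrank", U[:, :r] * s[:r], Vt[:r])
    return ("dense", B, None)            # otherwise keep the dense block

def decompress_block(rep):
    kind, a, b = rep
    return a @ b if kind == "lowrank" else a

# A block with rapidly decaying singular values compresses to rank 4.
rng = np.random.default_rng(0)
B = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
B += 1e-10 * rng.standard_normal((64, 64))   # tiny noise below the tolerance
rep = compress_block(B, tol=1e-6)
rel_err = np.linalg.norm(decompress_block(rep) - B) / np.linalg.norm(B)
print(rep[0], f"reconstruction error ~ {rel_err:.1e}")
```

As `tol` (and a companion screening threshold for arithmetic) goes to zero, every block reverts to its exact form, which is the sense in which the representation becomes exact.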
Development and application of an efficient gas extraction model for low-rank high-gas coal beds
Institute of Scientific and Technical Information of China (English)
Baiquan Lin; He Li; Desheng Yuan; Ziwen Li
2015-01-01
To promote gas extraction in low-rank high-gas coal beds, the pore structure characteristics of the coal and their effect on gas desorption were studied. The results show that micropores are relatively rare in low-rank coal; mesopores are usually semi-open and inkpot-shaped whereas macropores are usually slit-shaped. Gas desorption is relatively easy at high-pressure stages, whereas it is difficult at low-pressure stages because of the ‘bottleneck effect’ of the semi-open inkpot-shaped mesopores. A ‘two-three-two’ gas extraction model was established following experimental analysis and engineering practice applied in the Binchang mining area. In this model, gas extraction is divided into three periods: a planning period, a transitional period and a production period. In each period, surface extraction and underground extraction are performed simultaneously, and pressure-relief extraction and conventional extraction are coupled to each other. After applying this model, the gas extraction rate rose to 78.8%.
Energy Technology Data Exchange (ETDEWEB)
Sugiyama, T. [Center for Coal Utilization, Japan, Tokyo (Japan)]; Tsurui, M.; Suto, Y.; Asakura, M. [JGC Corp., Tokyo (Japan)]; Ogawa, J.; Yui, M.; Takano, S. [Japan COM Co. Ltd., Japan, Tokyo (Japan)]
1996-09-01
A CWM manufacturing technology was developed by upgrading low rank coals. Even though some low rank coals have such advantages as low ash, low sulfur and high volatile matter content, many of them are used merely on a small scale in areas near the mine-mouths because of high moisture content, low calorific value and high ignitability. Therefore, discussions were given on a coal fuel manufacturing technology by which coal is irreversibly dehydrated with as much volatile matter as possible remaining in the coal, and the coal is made into high-concentration CWM, so that it can be safely transported and stored. The technology uses a method to treat coal with hot water under high pressure and dry it with hot water. The method performs not only removal of water, but also irreversible dehydration without losing volatile matter, by decomposing hydrophilic groups on the surface and blocking micropores with volatile matter in the coal (wax and tar). The upgrading effect was verified by processing coals in a pilot plant, which yielded a greater calorific value and higher-concentration CWM than the conventional processes. A CWM combustion test proved lower NOx, lower SOx and a higher combustion rate than for bituminous coal. The ash content was also found to be lower. This process suits a Texaco-type gasification furnace. For a production scale of three million tons a year, the production cost is lower by 2 yen per 10³ kcal than for heavy oil with the same sulfur content. 11 figs., 15 tabs.
Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L
2010-09-15
The Decision Peptide-Driven (DPD) tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse ¹⁸O-labeling; and (4) matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI) analysis. The DPD software compares the MALDI results of the direct and inverse ¹⁸O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow; those peptides are used for subsequent accurate protein quantification. Interpreting the MALDI data from direct and inverse labeling experiments is time-consuming, requiring a significant amount of time to make all comparisons manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just a few minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse ¹⁸O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) to allow those peptides to be used as internal standards for subsequent accurate protein quantification using ¹⁸O-labeling. In this work the DPD software is presented and explained with the quantification of the protein carbonic anhydrase.
Low-temperature co-pyrolysis of a low-rank coal and biomass to prepare smokeless fuel briquettes
Energy Technology Data Exchange (ETDEWEB)
Blesa, M.J.; Miranda, J.L.; Moliner, R.; Izquierdo, M.T. [Instituto de Carboquimica CSIC, P.O. Box 589, 50080 Zaragoza (Spain)]; Palacios, J.M. [Instituto de Catalisis y Petroleoquimica CSIC, Cantoblanco, 28049 Madrid (Spain)]
2003-12-01
Smokeless fuel briquettes have been prepared with low-rank coal and biomass. These raw materials were mixed in different ratios and pyrolysed at 600 °C with the aim of reducing both the volatile matter and the sulphur content, and of increasing the high calorific value (HCV). The co-pyrolysis of coal and biomass has shown a synergetic effect: the biomass favours the release of hydrogen sulphide during the thermal treatment, which can be explained in terms of the hydrogen-donor character of the biomass. Moreover, the optimisation of the amount of binder and the influence of different types of biomass in the blend have been studied with respect to the mechanical properties of the briquettes (impact resistance, compression strength and abrasion). Briquettes prepared with sawdust (S) present better mechanical properties than those with olive stones (O) because of sawdust's fibrous texture.
Large-scale Nyström kernel matrix approximation using randomized SVD.
Li, Mu; Bi, Wei; Kwok, James T; Lu, Bao-Liang
2015-01-01
The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficient number of columns have to be sampled. On very large data sets, the singular value decomposition (SVD) step on the resultant data submatrix can quickly dominate the computations and become prohibitive. In this paper, we propose an accurate and scalable Nyström scheme that first samples a large column subset from the input matrix, but then only performs an approximate SVD on the inner submatrix using recent randomized low-rank matrix approximation algorithms. Theoretical analysis shows that the proposed algorithm is as accurate as the standard Nyström method that directly performs a large SVD on the inner submatrix. On the other hand, its time complexity is only as low as performing a small SVD. Encouraging results are obtained on a number of large-scale data sets for low-rank approximation. Moreover, as the most computationally expensive steps can be easily distributed and there is minimal data transfer among the processors, significant speedup can be further obtained with the use of multiprocessor and multi-GPU systems.
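The "approximate SVD on the inner submatrix" step relies on randomized low-rank approximation. Below is a minimal sketch of a randomized range-finder SVD with subspace iteration, the family of algorithms the abstract refers to, not the paper's full Nyström scheme; names and parameter choices are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversamples=10, n_iter=2, seed=0):
    """Approximate truncated SVD via randomized range finding."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversamples, n)
    # Sample the column space of A with a Gaussian test matrix.
    Q = np.linalg.qr(A @ rng.standard_normal((n, k)))[0]
    # Power iterations sharpen the captured subspace when the
    # singular values decay slowly.
    for _ in range(n_iter):
        Q = np.linalg.qr(A.T @ Q)[0]
        Q = np.linalg.qr(A @ Q)[0]
    # Small SVD on the projected k x n matrix, then lift back.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Exactly rank-5 test matrix, so a rank-5 approximation is near-exact.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")
```

Only the small k-by-n projected matrix sees a dense SVD, which is why the overall cost is "as low as performing a small SVD".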
Energy Technology Data Exchange (ETDEWEB)
Takarada, Y.; Kato, K.; Kuroda, M.; Nakagawa, N. [Gunma University, Gunma (Japan). Faculty of Engineering]; Roman, M. [New Energy and Industrial Technology Development Organization, Tokyo (Japan)]
1997-02-01
Experiments reveal the characteristics of low rank coal serving as a desulfurizing material in a fluidized coal bed reactor, with oxygen-containing functional groups exchanged with Ca ions. This effort aims at identifying inexpensive Ca materials and determining the desulfurizing characteristics of Ca-carrying brown coal. A slurry of cement sludge serving as a Ca source and low rank coal is agitated for the exchange of functional groups and Ca ions, and the desulfurizing characteristics of the Ca-carrying brown coal are determined. The Ca-carrying brown coal and high-sulfur coal char are mixed and incinerated in a fluidized bed reactor, and it is found that a desulfurization rate of 75% is achieved when the Ca/S ratio is 1 in the desulfurization of SO2. This rate is far higher than the rate obtained when limestone or cement sludge without preliminary treatment is used as a desulfurizer. Next, Ca-carrying brown coal and H2S are made to react with each other in a fixed bed reactor, and it is found that the desulfurization characteristics are not dependent on the diameter of the Ca-carrying brown coal grains, that the coal differs from limestone in that it stays quite active against H2S for as long as 40 minutes after the start of the reaction, and that CaO of small crystal diameter is dispersed in quantity into the char upon thermal disintegration of Ca-carrying brown coal, causing the coal to stay quite active. 5 figs.
Liu, H.; Banville, D. L.; Basus, V. J.; James, T. L.
A method (termed CARNIVAL) for accurately determining distances from proton homonuclear rotating-frame Overhauser effect spectroscopy (ROESY) is described. The method entails an iterative calculation of the relaxation matrix using methodology introduced with the MARDIGRAS algorithm for analysis of two-dimensional nuclear Overhauser effect spectra (B. A. Borgias and T. L. James, J. Magn. Reson. 87, 475, 1990). The situation is complicated in the case of ROESY as spectral peak intensities are influenced by resonance offset and contributions from homonuclear Hartmann-Hahn (HOHAHA) transfer if the nuclear spins are related by scalar coupling. The effects of spin-locking field strength on distance determinations and the ensuing distance errors incurred when HOHAHA corrections are made with limited knowledge of scalar (J) coupling information have been evaluated using simulated ROESY intensities with a model peptide structure. It has been demonstrated that accurate distances can be obtained with little or no explicit knowledge of the homonuclear coupling constants over a moderate range of spin-locking field strengths. The CARNIVAL algorithm has been utilized to determine distances in a decapeptide using experimental ROESY data without measured coupling constants.
Energy Technology Data Exchange (ETDEWEB)
Kosminski, A.; Agnew, J.B. [Department of Chemical Engineering, University of Adelaide, South Australia, 5005 (Australia); Ross, D.P. [Department of Chemical Engineering, Tokyo University of Agriculture and Technology, BASE, Nakamachi 2-24-16 Koganei, Tokyo, 184-8588 (Japan)
2006-11-15
Thermodynamic equilibrium calculations were performed to determine the possible compositions and conditions for formation of potential liquid phases responsible for fluidised bed agglomeration during gasification of a high-sulphur low-rank coal from South Australia. The coals from this region of Australia are typically characterised by high levels of sodium, silica and sulphur. The transformation behaviour of the sodium present in the coal, either as a carboxylate forming part of the coal organic matter or as a soluble salt (NaCl), and its reaction with silicon compounds (silica or kaolin), is presented. The influence of temperature and gas atmosphere on equilibrium composition was evaluated. Thermodynamic equilibrium calculations show that the distribution of sodium among the produced species will depend on the form of sodium in the coal, the gas atmosphere and the forms in which silicon is present in the coal. Steam was found to have the most significant effect, causing a lowering of the sodium carbonate melting point temperature. (author)
Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal
Energy Technology Data Exchange (ETDEWEB)
Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh
2012-11-30
This report describes the development of the design of an advanced dry feed system that was carried out under Task 4.0 of Cooperative Agreement DE-FE0007902 with the US DOE, “Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the use of Low-Rank Coal.” The resulting design will be used for the advanced technology IGCC case with 90% carbon capture for sequestration to be developed under Task 5.0 of the same agreement. The scope of work covered coal preparation and feeding up through the gasifier injector. Subcomponents have been broken down into feed preparation (including grinding and drying), low pressure conveyance, pressurization, high pressure conveyance, and injection. Pressurization of the coal feed is done using Posimetric Feeders sized for the application. In addition, a secondary feed system is described for preparing and feeding slag additive and recycle fines to the gasifier injector. This report includes information on the basis for the design, requirements for down selection of the key technologies used, the down selection methodology and the final, down selected design for the Posimetric Feed System, or PFS.
Directory of Open Access Journals (Sweden)
Ryan Wen Liu
2017-03-01
Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to solve directly through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or can be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee superior imaging performance in terms of quantitative and visual image quality assessments.
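The ADMM decomposition into simple sub-problems can be illustrated on a static low-rank-plus-sparse toy problem. The sketch below solves the convex nuclear-norm-plus-l1 splitting with a fixed penalty parameter; the paper's non-convex penalties and its (k,t)-space sampling operator are omitted, and all names and parameter choices are illustrative.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(M, lam=None, mu=1.0, n_iter=1000):
    """Split M into low-rank L and sparse S by ADMM: each sub-problem
    is a closed-form proximal step, mirroring the decomposition above."""
    lam = lam if lam is not None else 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)             # low-rank sub-problem
        S = soft_threshold(M - L + Y / mu, lam / mu)  # sparse sub-problem
        Y += mu * (M - L - S)                         # dual ascent
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 60))  # rank 4
S0 = np.where(rng.random((60, 60)) < 0.05, 10.0, 0.0)             # 5% outliers
L, S = low_rank_plus_sparse(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print("relative recovery error:", rel_err)
```

Each iteration alternates two closed-form proximal updates and a dual update, which is the sense in which ADMM reduces a hard coupled problem to easy sub-problems.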
Energy Technology Data Exchange (ETDEWEB)
Britt, P.F.; Buchanan, A.C. III; Kidder, M.K.; Skeen, J.D. [Oak Ridge National Lab., Oak Ridge, TN (USA). Chemical Sciences Division
2003-07-01
In this study, the sealed-tube pyrolysis of mixtures of m-phenylphenol and benzoic acid was investigated at 400°C to determine whether cross-linking reactions can occur, and to establish the low-temperature pyrolysis pathways of aryl esters, which are not known. Initial studies show that condensation reactions occur between carboxylic acids and phenols to form aryl esters at temperatures as low as 200°C. With a 3:1 ratio of m-phenylphenol to benzoic acid, yields of m-phenylphenyl benzoate were as high as 50% at 400°C. At short reaction times, the dominant products were the aryl ester and benzene, formed by the acid-catalyzed decarboxylation of benzoic acid; at longer times, other arylated products grew in, indicating that radical reactions were occurring. These products appear to arise from the induced decomposition of benzoic anhydride to form phenyl radicals. The thermal stability of aryl esters was investigated through the pyrolysis of phenyl benzoate at 400°C. As predicted, the aryl ester appeared to be thermally stable but hydrolytically unstable. In general, formation of aryl esters could act as a low-temperature cross-link in low-rank coals. 19 refs., 3 figs., 1 tab.
Directory of Open Access Journals (Sweden)
Mahidin Mahidin
2012-12-01
NOx and N2O emissions from coal combustion are claimed to be major contributors to acid rain, photochemical smog, greenhouse warming and ozone depletion. Accordingly, the formation of these emissions is a topic of interest in combustion research. In this paper, a theoretical study, by modeling and simulation, of NOx and N2O formation in the co-combustion of low-rank coal and palm kernel shell has been carried out. The combustion model was developed using the principle of chemical-reaction equilibrium, and the composition of the flue gas was evaluated by minimizing the Gibbs free energy. The results showed that introducing biomass into coal combustion can reduce the NOx concentration considerably. The maximum NO level in co-combustion of low-rank coal and palm kernel shell at a 1:1 fuel composition is 2,350 ppm, low compared with up to 3,150 ppm for combustion of low-rank coal alone. Moreover, N2O is less than 0.25 ppm in all cases. Keywords: low-rank coal, N2O emission, NOx emission, palm kernel shell
Directory of Open Access Journals (Sweden)
Herviyanti Herviyanti
2013-01-01
The objective of this research was to examine the capability of humic matter from low-rank coal, combined with P fertilizer, to adsorb Al and Fe, improve soil fertility, and increase P-fertilization efficiency and productivity on an Oxisol, so that optimal corn productivity can be achieved. The experiment used a 3 x 4 factorial design with 3 replications in randomized groups. The first factor was 3 ways of incubating humic matter with P fertilizer: I1 = incubation of humic matter for 1 week, then incubation with P fertilizer for 1 week; I2 = incubation of humic matter and P fertilizer directly in the soil for 2 weeks; and I3 = humic matter and P fertilizer mixed for 1 week, then incubated in the soil for 1 week. The second factor was the humic matter and P-fertilizer combination at 4 doses: H1 = 400 ppm (0.8 Mg ha-1) + 100% R; H2 = 400 ppm + 75% R; H3 = 800 ppm (1.6 Mg ha-1) + 100% R; and H4 = 800 ppm + 75% R. The results showed that the best treatment interaction was 800 ppm humic matter with the 100% R P-fertilizer dose under incubation I3: corn yields increased from 4.53 Mg ha-1 (control) and 5.65 Mg ha-1 (farmer tradition) to 9.21 Mg ha-1. This result is almost the same as that for the 800 ppm humic matter + 75% R P-fertilizer dose incubated the I3 way. It was concluded that the addition of humic matter with incubation I3 could save up to 25% of P fertilizer.
Directory of Open Access Journals (Sweden)
Zutao Zhang
2016-06-01
Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the needs of vehicle-reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle-reversing safety. The proposed system consists of four main modules: multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control. First, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance to obstacles behind the vehicle while it is reversing. Second, an information fusion algorithm using an adaptive Kalman filter processes the data obtained by the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. The framework of a particle filter with low-rank representation is then used to track the main obstacles; the low-rank representation is used to optimize an objective particle template that has the smallest L1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making reversing control safer and more reliable. System simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.
Lee, Dong-Wook; Bae, Jong-Soo; Lee, Young-Joo; Park, Se-Joon; Hong, Jai-Chang; Lee, Byoung-Hwa; Jeon, Chung-Hwan; Choi, Young-Chan
2013-02-05
Coal-fired power plants face two major independent problems: the burden of reducing CO2 emissions to comply with renewable portfolio standards (RPS) and cap-and-trade systems, and the need to use low-rank coal due to the instability of the high-rank coal supply. To address these unresolved issues, integrated gasification combined cycle (IGCC) with carbon capture and storage (CCS) has been suggested, and low-rank coal has been upgraded by high-pressure, high-temperature processes. However, IGCC incurs huge construction costs, and the coal-upgrading processes require fossil-fuel-derived additives and harsh operating conditions. Here, we first show a hybrid coal that can solve these two problems simultaneously while using existing power plants. Hybrid coal is defined as a two-in-one fuel combining low-rank coal with a sugar-cane-derived bioliquid, such as molasses or sugar cane juice, by diffusion of the bioliquid into coal intrapores and precarbonization of the bioliquid. Unlike a simple blend of biomass and coal, which shows dual combustion behavior, hybrid coal exhibits a single coal-combustion pattern. If hybrid coal (biomass/coal ratio = 28 wt %) is used as a fuel for 500 MW power generation, the net CO2 emission is 21.2-33.1% and 12.5-25.7% lower than those of low-rank coal and design coal, respectively, and the required coal supply can be reduced by 33% compared with low-rank coal. Considering high oil prices and the time required before a stable renewable energy supply can be established, hybrid coal could be recognized as an innovative low-carbon-emission energy technology that can bridge the gulf between fossil fuels and renewable energy, because various water-soluble biomasses could be used as additives through proper modification of the preparation conditions.
Konakli, Katerina; Sudret, Bruno
2016-09-01
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and the sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the "curse of dimensionality", namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients, and error estimation. We then compare canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibits smaller errors than sparse PCE when the number of available model evaluations is small relative to the input dimension.
Matrix Coherence and the Nystrom Method
Talwalkar, Ameet
2010-01-01
The Nystrom method is an efficient technique to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence and connect matrix coherence to the performance of the Nystrom method. Making use of related work in the compressed sensing and the matrix completion literature, we derive novel coherence-based bounds for the Nystrom method in the low-rank setting. We then present empirical results that corroborate these theoretical bounds. Finally, we present more general empirical results for the full-rank setting that convincingly demonstrate the ability of matrix coherence to measure the degree to which information can be extracted from a subset of columns.
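The column-sampling assumption at the heart of the Nystrom method is easy to state in code. Below is a minimal, generic sketch of the Nystrom approximation for a PSD matrix; the index set `idx` stands in for whatever sampling scheme is used, and the coherence-based bounds discussed in the abstract are not implemented here.

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation of a PSD matrix K from the columns listed
    in idx: K is approximated by C W^+ C.T, where C = K[:, idx] and
    W = K[idx, idx] is the small intersection block."""
    C = K[:, idx]                  # sampled columns
    W = K[np.ix_(idx, idx)]        # intersection of sampled rows/columns
    return C @ np.linalg.pinv(W) @ C.T
```

When K has exact rank r and the r sampled columns span its range (the best case the coherence analysis quantifies), the approximation is exact; for higher-rank K the error grows with the coherence of the leading subspace.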
Cheng, Jiubing
2014-08-05
In elastic imaging, the extrapolated vector fields are decomposed into pure wave modes, such that the imaging condition produces interpretable images that characterize the reflectivity of different reflection types. Conventionally, wavefield decomposition in anisotropic media is costly because the operators involved depend on the velocity and are thus not stationary. In this abstract, we propose an efficient approach to directly extrapolate the decomposed elastic waves using low-rank approximate mixed space/wavenumber-domain integral operators for heterogeneous transversely isotropic (TI) media. The low-rank approximation is thus applied to the pseudo-spectral extrapolation and decomposition at the same time. The pseudo-spectral implementation also allows relatively large time steps in which the low-rank approximation is applied. Synthetic examples show that the approach yields dispersion-free extrapolation of the decomposed quasi-P (qP) and quasi-SV (qSV) modes, which can be used for imaging, as well as of the total elastic wavefields.
Halko, Nathan; Tropp, Joel A
2009-01-01
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed - either explicitly or implicitly - to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In ...
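The "sample a subspace, compress, then decompose deterministically" pipeline described above can be sketched in a few lines. This is a minimal prototype of the basic randomized range finder with oversampling `p`; the power iterations and other refinements from the survey are deliberately omitted.

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    """Randomized partial SVD sketch: sample the range of A with a
    Gaussian test matrix, orthonormalize the sample, then take the SVD
    of the small compressed matrix B = Q.T @ A."""
    rng = rng or np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], k + p))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # compressed matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]             # lift back to big space
```

For a matrix of exact rank k the sampled basis captures the full range and the factorization is exact up to round-off; for general matrices the oversampling parameter `p` controls the failure probability.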
Institute of Scientific and Technical Information of China (English)
李璐; 董秋雷; 赵瑞珍
2015-01-01
Considering that data used in many applications are intrinsically in matrix form rather than vector form, this paper focuses on a generalized version of the problem of low-rank approximation of a matrix with missing components, i.e., low-rank approximation of a set of matrices with missing components. The generalized problem is first formulated as an optimization problem that minimizes the total reconstruction error of the known components in these matrices. An iterative algorithm, called GLRAMMC, is then designed for calculating the generalized low-rank approximations of matrices with missing components, followed by a detailed algorithmic analysis. Extensive experimental results on synthetic data as well as on real image data show the effectiveness of the proposed algorithm.
Energy Technology Data Exchange (ETDEWEB)
Hang Wenhui; Wang Ling; Li Shurong [China Coal Research Institute, Beijing (China)
1999-11-01
SO2 removal from flue gas by activated carbon and HNO3-treated activated carbon from Chinese low-rank coal was studied. SO2 adsorption on activated carbon is mainly chemisorption, and a correlation was shown between adsorption capacity and the number of active sites on the carbon surface. HNO3 treatment transforms C-H bonds in the activated carbon into active sites for removal of SO2. 2 figs., 2 tabs.
Energy Technology Data Exchange (ETDEWEB)
Oki, A.; Xie, X.; Nakajima, T.; Maeda, S. [Kagoshima University, Kagoshima (Japan). Faculty of Engineering
1996-10-28
With the objective of understanding the mechanisms of low-rank coal reformation processes, changes in the properties of the coal surface were discussed. The difficulty of handling low-rank coal is attributed to its large intrinsic water content; since it also contains highly volatile components, it carries a danger of spontaneous ignition. The hot-water drying (HWD) method was used for reformation. Coal dry-pulverized to a grain size of 1 mm or smaller was mixed with water to make a slurry, heated in an autoclave, cooled, filtered, and dried in vacuum. HWD applied to Loy Yang and Yallourn coals resulted in a rapid rise in pressure starting from about 250°C. The water content absorbed into the coal (ANA value) decreased greatly, the surface being made effectively hydrophobic by the high temperature and pressure. Hydroxyl- and carbonyl-group contents in the coal decreased markedly with rising reformation temperature (according to FT-IR measurements). The specific surface area of the original Loy Yang coal was 138 m2/g, but decreased greatly to 73 m2/g when the reformation temperature was raised to 350°C, because volatile components dissolve from the coal as tar and block the surface pores. 2 refs., 4 figs.
Energy Technology Data Exchange (ETDEWEB)
Wu, Z.; Otsuka, Y. [Tohoku University, Sendai (Japan). Institute for Chemical Reaction Science
1996-10-28
In order to establish preventive measures against coal NOx, the formation of N2 in fixed-bed pyrolysis of low-rank coals and its mechanisms were discussed. Chinese ZN coal and German RB coal were used. Neither coal produces N2 at 600°C, where the main product is volatile nitrogen. Conversion into N2 does not depend on heating rate, but increases linearly with increasing temperature, reaching 65% to 70% at 1200°C. In contrast, char nitrogen decreases linearly with temperature. These phenomena suggest that char nitrogen or its precursor is the major source of N2. When mineral matter is removed with hydrochloric acid, its catalytic action is lost and conversion into N2 decreases remarkably. Iron existing in ion-exchanged form in low-rank coal is reduced and finely dispersed as metallic iron particles; these particles react with heterocyclic nitrogen compounds to form iron nitride. A solid-phase reaction mechanism may be conceived in which N2 is produced by decomposition of the iron nitride. 5 refs., 4 figs., 1 tab.
Hydrothermal extraction and gasification of low rank coal with catalyst Al2O3 and Pd/Al2O3
Fachruzzaki, Handayani, Ismi; Mursito, Anggoro Tri
2017-01-01
Upgrading coal quality is very important for the utilization of low-rank coal. This research attempts to increase the quality of low-rank coal using a hydrothermal process with hot compressed water (HCW) at 200 °C and 3 MPa. The products of this process were a solid residue and a liquid filtrate containing organic components; gasification of the filtrate produced synthetic gas. The results showed that a higher water flow rate could increase the organic component in the filtrate. When a catalyst was used, the extraction process was faster, and the organic component in the filtrate increased while its content in the residue decreased. Fourier-transform infrared (FTIR) spectroscopy indicated that coal extraction using HCW was more effective with the Pd/Al2O3 catalyst. Increasing the process temperature increases the amounts of CO and H2 gas. In this research, the highest net heating value, at 800 °C using K2CO3 solution and the Pd/Al2O3 catalyst, was 17,774.36 kJ/kg; the highest cold-gas efficiency was 91.29% and the best carbon conversion was 34.78%.
Li, Hailong; Wu, Chang-Yu; Li, Ying; Zhang, Junying
2011-09-01
CeO2-TiO2 (CeTi) catalysts synthesized by an ultrasound-assisted impregnation method were employed to oxidize elemental mercury (Hg(0)) in simulated low-rank (sub-bituminous and lignite) coal combustion flue gas. The CeTi catalysts with a CeO2/TiO2 weight ratio of 1-2 exhibited high Hg(0) oxidation activity from 150 to 250 °C. The high concentrations of surface cerium and oxygen were responsible for their superior performance. Hg(0) oxidation over CeTi catalysts was proposed to follow the Langmuir-Hinshelwood mechanism, whereby reactive species from adsorbed flue gas components react with adjacently adsorbed Hg(0). In the presence of O2, a promotional effect of HCl, NO, and SO2 on Hg(0) oxidation was observed. Without O2, HCl and NO still promoted Hg(0) oxidation due to the surface oxygen, while SO2 inhibited Hg(0) adsorption and subsequent oxidation. Water vapor also inhibited Hg(0) oxidation. HCl was the most effective flue gas component responsible for Hg(0) oxidation; however, the combination of SO2 and NO without HCl also resulted in high Hg(0) oxidation efficiency. This superior oxidation capability is advantageous for Hg(0) oxidation in low-rank coal combustion flue gas with low HCl concentration.
An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors
Xu, Yangyang; Wen, Zaiwen; Zhang, Yin
2011-01-01
This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems, nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. As it takes advantage of both nonnegativity and low rank, its results can be superior to those of either problem alone. Our algorithm is applied to minimizing a non-convex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorization while accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hypers...
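The model fitted above — nonnegative X and Y whose product matches M on a subset of entries — can be sketched without the paper's alternating direction augmented Lagrangian machinery. The stand-in below uses classic masked multiplicative updates instead; the function name, iteration count and initialization are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def nmf_complete(M, mask, r, n_iter=1000, eps=1e-9, seed=0):
    """Fit nonnegative factors X (m x r) and Y (r x n) so that X @ Y
    matches M on the observed entries (mask == 1), using Lee-Seung
    multiplicative updates restricted to the observed entries."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, r)) + 0.1          # strictly positive start
    Y = rng.random((r, n)) + 0.1
    W = mask * M                          # observed data, zero elsewhere
    for _ in range(n_iter):
        XY = mask * (X @ Y)
        X *= (W @ Y.T) / (XY @ Y.T + eps)             # update X, Y fixed
        Y *= (X.T @ W) / (X.T @ (mask * (X @ Y)) + eps)  # update Y, X fixed
    return X, Y
```

Because the updates are multiplicative on positive initial factors, nonnegativity is maintained automatically, which is the property the abstract exploits alongside low rank.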
Kwak, Hye-Lim; Han, Sun-Kyung; Park, Sunghoon; Park, Si Hong; Shim, Jae-Yong; Oh, Mihwa; Ricke, Steven C; Kim, Hae-Yeong
2015-09-01
Previous detection methods for Citrobacter are considered time-consuming and laborious. In this study, we developed a rapid and accurate detection method for Citrobacter species in pork products using matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS). A total of 35 Citrobacter strains were isolated from 30 pork products and identified by both MALDI-TOF MS and 16S rRNA gene sequencing. All isolates were identified to the species level by MALDI-TOF MS, while 16S rRNA gene sequencing could not discriminate them clearly. These results confirm that MALDI-TOF MS is a more accurate and rapid method for the identification of Citrobacter species.
Calvo, F; Falvo, Cyril; Parneix, Pascal
2013-01-21
An explicit polarizable potential for the naphthalene-argon complex has been derived assuming only atomic contributions, aiming at large scale simulations of naphthalene under argon environment. The potential was parametrized from dedicated quantum chemical calculations at the CCSD(T) level, and satisfactorily reproduces available structural and energetic properties. Combining this potential with a tight-binding model for naphthalene, collisional energy transfer is studied by means of dedicated molecular dynamics simulations, nuclear quantum effects being accounted for in the path-integral framework. Except at low target temperature, nuclear quantum effects do not alter the average energies transferred by the collision or the collision duration. However, the distribution of energy transferred is much broader in the quantum case due to the significant zero-point energy and the higher density of states. Using an ab initio potential for the Ar-Ar interaction, the IR absorption spectrum of naphthalene solvated by argon clusters or an entire Ar matrix is computed via classical and centroid molecular dynamics. The classical spectra exhibit variations with growing argon environment that are absent from quantum spectra. This is interpreted by the greater fluxional character experienced by the argon atoms due to vibrational delocalization.
Matrix Recipes for Hard Thresholding Methods
Kyrillidis, Anastasios
2012-01-01
Given a set of possibly corrupted and incomplete linear measurements, we leverage low-dimensional models to best explain the data for provable solution quality in inversion. A non-exhaustive list of examples includes sparse vector and low-rank matrix approximation. Most of the well-known low dimensional models are inherently non-convex. However, recent approaches prefer convex surrogates that "relax" the problem in order to establish solution uniqueness and stability. In this paper, we tackle the linear inverse problems revolving around low-rank matrices by preserving their non-convex structure. To this end, we present and analyze a new set of sparse and low-rank recovery algorithms within the class of hard thresholding methods. We provide strategies on how to set up these algorithms via basic "ingredients" for different configurations to achieve complexity vs. accuracy tradeoffs. Moreover, we propose acceleration schemes by utilizing memory-based techniques and randomized, ε-approximate, low-rank pr...
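The simplest member of the hard-thresholding family discussed above — take a gradient step, then project onto the set of rank-r matrices via truncated SVD — can be sketched for the matrix-completion measurement operator. This is a generic singular-value-projection sketch, not the paper's accelerated or memory-based variants, and the step size (inverse sampling rate) is a heuristic assumption.

```python
import numpy as np

def svp_complete(M_obs, mask, r, n_iter=300):
    """Iterative hard thresholding for matrix completion: gradient step
    on the observed-entry residual, then hard-threshold to rank r by
    keeping only the top-r terms of the SVD (the non-convex projection)."""
    step = mask.size / mask.sum()             # heuristic: inverse sampling rate
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = X - step * (mask * (X - M_obs))   # gradient of the data-fit term
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]  # projection onto rank-r set
    return X
```

The rank-r projection is exactly the non-convex structure the abstract argues for preserving, in contrast to replacing it with a nuclear-norm relaxation.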
Matrix Completion from Noisy Entries
Keshavan, Raghunandan H; Oh, Sewoong
2009-01-01
Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the `Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan et al.(2009), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.
Accurate Analysis Method on Tangent Stiffness Matrix for Space Beam Element
Institute of Scientific and Technical Information of China (English)
刘树堂
2014-01-01
In order to effectively conduct post-buckling analysis of space frame structures, a new accurate method for deriving the tangent stiffness matrix of a space beam element is proposed. First, the incremental relation between the member-end forces and member-end displacements of the beam element is established by the direct equilibrium method. The derivative of the member-end forces with respect to the member-end displacements is then obtained according to matrix differentiation theory, and setting the increment of member-end displacement to zero in the resulting expression yields the tangent stiffness matrix of the beam element. Post-buckling analyses of a six-storey and a twenty-storey space frame were performed. The results show that the present tangent stiffness matrix for the space beam element has sufficient precision and can be effectively applied to post-buckling analysis of large space frames.
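The derivation strategy described above — obtain the tangent stiffness as the derivative of the member-end forces with respect to the member-end displacements — can be checked numerically. The sketch below does this for a 2D truss element, a deliberately simpler element than the paper's space beam; the geometry, the engineering-strain measure and the EA value are illustrative assumptions. The analytic tangent (material part plus geometric part) is compared against a central finite difference of the force vector.

```python
import numpy as np

def internal_force(X1, X2, u, EA):
    """Member-end forces of a 2D truss element whose axial force is
    EA times the engineering strain of the deformed chord.
    u = [u1x, u1y, u2x, u2y] stacks the two nodal displacements."""
    L = np.linalg.norm(X2 - X1)              # undeformed length
    x = (X2 + u[2:]) - (X1 + u[:2])          # deformed chord vector
    l = np.linalg.norm(x)
    d = x / l                                # deformed direction
    N = EA * (l - L) / L                     # axial force
    return np.concatenate([-N * d, N * d])

def tangent_stiffness(X1, X2, u, EA):
    """Analytic derivative of internal_force w.r.t. u: the material part
    (EA/L) d d^T plus the geometric part (N/l)(I - d d^T)."""
    L = np.linalg.norm(X2 - X1)
    x = (X2 + u[2:]) - (X1 + u[:2])
    l = np.linalg.norm(x)
    d = x / l
    N = EA * (l - L) / L
    k = EA / L * np.outer(d, d) + N / l * (np.eye(2) - np.outer(d, d))
    return np.block([[k, -k], [-k, k]])
```

Setting the displacement increment to zero after differentiating, as the abstract describes, is what the closed-form `tangent_stiffness` encodes; the finite-difference comparison in the test plays the role of an independent verification.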
Zhang, Haixia; Zhang, Yukui; Zhu, Zhiping; Lu, Qinggang
2016-08-01
To promote the utilization efficiency of coal resources and to assist with the control of sulphur during gasification and/or downstream processes, it is essential to gain basic knowledge of sulphur transformation in relation to gasification performance. In this research we investigated the influence of the O2/C molar ratio on both the gasification performance and the sulphur transformation of a low-rank coal, and the sulphur-transformation mechanism is also discussed. Experiments were performed in a circulating fluidized bed (CFB) gasifier with the O2/C molar ratio ranging from 0.39 to 0.78 mol/mol. The results showed that increasing the O2/C molar ratio from 0.39 to 0.78 mol/mol increases carbon conversion from 57.65% to 91.92% and the sulphur release ratio from 29.66% to 63.11%. Increasing the O2/C molar ratio favors the formation of H2S, and also favors transformation of the retained sulphur into more stable forms. Due to the reducing conditions of coal gasification, H2S is the main form of released sulphur; it can be formed by decomposition of pyrite and by secondary reactions. Bottom char shows a lower sulphur content than fly ash, with the sulphur mainly existing as sulphates. X-ray photoelectron spectroscopy (XPS) measurements also show that the intensity of pyrite declines and the intensity of sulphates increases for fly ash and bottom char, the change being more obvious for bottom char. During the CFB gasification process, bigger char particles circulate in the system and have longer residence times for further reaction, which favors the release of sulphur species and enhances the transformation of retained sulphur into more stable forms.
Energy Technology Data Exchange (ETDEWEB)
Izquierdo, M.T.; Rubio, B. [Departamento de Energia y Medio Ambiente, Instituto de Carboquimica (CSIC), C/Maria de Luna, 12, 50015 Zaragoza (Spain); Mayoral, C.; Andres, J.M. [Departamento de Procesos Quimicos, Instituto de Carboquimica (CSIC), C/Maria de Luna, 12, 50015 Zaragoza (Spain)
2001-10-25
The effectiveness of carbons as low-temperature selective catalytic reduction (SCR) catalysts depends upon their physical and chemical properties. Surface functional groups containing oxygen are closely related to the catalytic activity of carbons. These groups are expected to change the interaction between the carbon surface and the reactants through a variation in adsorption and reaction characteristics. This paper presents a detailed study of the effects of gas-phase sulfuric acid and oxygen oxidation treatments on catalytic NO reduction by low-rank coal-based carbon catalysts. Raw and treated carbons were characterized by N2 and CO2 surface areas, TPD and ash content. The NO removal capacity of the carbons was determined by passing a flow containing NO, H2O, O2, NH3 and N2 through a fixed bed of carbon at 150 °C with a residence time of 4 s, the effluent concentration being monitored continuously during the reaction. The effects of varying the type and conditions of the treatment on the physicochemical features of the carbons were studied. The gas-phase sulfuric acid treatment (corresponding to a first-step SO2 removal) markedly enhanced carbon activity for NO removal. By contrast, oxygen oxidation enhanced the NO removal capacity of the chars to a lesser extent. The carbons studied could therefore be used in a combined SO2/NO removal process, because the use and regeneration of the carbon in the first step is beneficial for performance in the second.
Directory of Open Access Journals (Sweden)
Ivan Gregor
2016-02-01
Background. Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for recovering species bins from deep-branching phyla is the expert-trained PhyloPythiaS package, in which a human expert decides on the taxa to incorporate in the model and identifies 'training' sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythiaS software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of the 4-6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software makes it possible to analyze Gb-sized metagenomes with inexpensive hardware and to recover species- or genus-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X on: https://github.com/algbioi/ppsp/wiki.
Accurate ab initio spin densities
Boguslawski, Katharina; Legeza, Örs; Reiher, Markus
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm, which calculates the spin density matrix elements as the basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...
Link Prediction via Matrix Completion
Pech, Ratha; Pan, Liming; Cheng, Hong; Zhou, Tao
2016-01-01
Inspired by the practical importance of social networks, economic networks, biological networks and so on, studies of large and complex networks have attracted a surge of attention in recent years. Link prediction is a fundamental issue in understanding the mechanisms by which new links are added to networks. We introduce the method of robust principal component analysis (robust PCA) into link prediction and estimate the missing entries of the adjacency matrix. On the one hand, our algorithm is based on the sparsity and low rank property of the matrix; on the other hand, it also performs very well when the network is dense. This is because a relatively dense real network is still sparse in comparison with the complete graph. According to extensive experiments on real networks from disparate fields, when the target network is connected and sufficiently dense, whether it is weighted or unweighted, our method is demonstrated to be very effective, with prediction accuracy considerably improved comparing wit...
Energy Technology Data Exchange (ETDEWEB)
Armbruster, L.; Eichholtz, P.; Suedhofer, F. [Deutsche Steinkohle AG, Herne (Germany). Hauptabteilung BA
2001-10-01
Water infusion into coal before winning is an obligatory measure to reduce dust, both with a view to health protection and to fire and explosion safety. The effect of infusion is, however, greater in high-rank seams than in low-rank ones. The highly effective dust control measures now applied when winning coal are causing the infusion effect to recede, and it is now possible to dispense with infusion in low-rank seam sections. Operational trials in the P and Erda seams have demonstrated that, when modern dust control methods are used, there is no longer any evidence of the infusion effect in the mine. Infusion can now be dispensed with in seams above seam P in stripping winning, provided that the official mining regulations on ensuring lower dust concentrations are observed. (orig.)
Hierarchical matrix techniques for the solution of elliptic equations
Chávez, Gustavo
2014-05-04
Hierarchical matrix approximations are a promising tool for approximating low-rank matrices, given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major applications of this technology, but it can be applied in other areas where low-rank properties exist. Such is the case of the Block Cyclic Reduction algorithm, which is used as a direct solver for the constant-coefficient Poisson equation. We explore the variable-coefficient case, also using Block Cyclic Reduction, with the addition of hierarchical matrices to represent matrix blocks, hence improving the otherwise O(N^2) algorithm into an efficient O(N) algorithm.
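To make the low-rank idea concrete, here is a minimal numpy sketch (our own illustration, not the dissertation's code) of the core hierarchical-matrix operation: compressing a single well-separated off-diagonal block with a truncated SVD. The kernel, cluster geometry and tolerance are arbitrary choices for the demo.

```python
import numpy as np

def compress_block(B, tol):
    """Truncated-SVD compression of a matrix block, the basic operation
    behind hierarchical-matrix storage: keep only singular values above
    tol * s_max and store the block as two thin factors."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = int((s > tol * s[0]).sum())
    return U[:, :r] * s[:r], Vt[:r]          # B ≈ (U_r s_r) @ Vt_r

# Off-diagonal blocks of a smooth kernel over well-separated clusters
# have rapidly decaying singular values, hence tiny numerical rank.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(2.0, 3.0, 64)
B = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
L, R = compress_block(B, 1e-10)
```

Storing the two thin factors costs 2·64·r numbers instead of 64², which is exactly the economy of representation the abstract refers to.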
Sparse Planar Array Synthesis Using Matrix Enhancement and Matrix Pencil
Directory of Open Access Journals (Sweden)
Mei-yan Zheng
2013-01-01
The matrix enhancement and matrix pencil (MEMP) method plays an important role in modern signal processing applications. In this paper, MEMP is applied to the problem of two-dimensional sparse array synthesis. Firstly, the desired array radiation pattern, as the original pattern to be approximated, is sampled to form an enhanced matrix. After performing the singular value decomposition (SVD) and discarding the insignificant singular values according to the prescribed approximation error, the minimum number of elements can be obtained. Secondly, in order to obtain the eigenvalues, the generalized eigendecomposition is applied to the approximate matrix, which is the optimal low-rank approximation of the enhanced matrix corresponding to the sparse planar array, and the ESPRIT algorithm is then utilized to pair the eigenvalues related to each dimension of the planar array. Finally, the element positions and excitations of the sparse planar array are calculated according to the correct pairing of eigenvalues. Simulation results are presented to illustrate the effectiveness of the proposed approach.
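The first step of the pipeline, forming the "enhanced" (block-Hankel) matrix whose numerical rank reveals the minimum element count, can be sketched as follows. This is our own minimal illustration with hypothetical pencil parameters K1, K2, not the authors' code.

```python
import numpy as np

def enhanced_matrix(Y, K1, K2):
    """Form the MEMP enhanced matrix: a block-Hankel matrix whose blocks
    are Hankel matrices built from the rows of the 2-D sample grid Y."""
    M, N = Y.shape
    def hankel_row(y):                       # K2 x (N-K2+1) Hankel block
        return np.array([y[i:i + N - K2 + 1] for i in range(K2)])
    blocks = [hankel_row(Y[m]) for m in range(M)]
    return np.block([[blocks[i + j] for j in range(M - K1 + 1)]
                     for i in range(K1)])

# A sampled pattern that is a sum of two 2-D exponentials yields an
# enhanced matrix of rank 2, which is how the minimum number of
# elements is read off after the SVD truncation step.
m, n = np.arange(8), np.arange(8)
Y = np.outer(0.9 ** m, 0.8 ** n) + np.outer(0.5 ** m, 0.3 ** n)
E = enhanced_matrix(Y, 4, 4)
```

The subsequent generalized eigendecomposition and ESPRIT pairing then operate on the truncated SVD factors of this matrix.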
2015-08-24
… exploits the PI's results from random matrix theory to improve denoising performance relative to the truncated SVD in the moderate to low SNR regime … model and includes missing-data settings. These insights have led to the development of new data-driven algorithms for low-rank matrix denoising that provably outperform PCA (or truncated SVD) based techniques and other convex relaxation based schemes. Motivation: The truncated singular value …
A Simpler Approach to Matrix Completion
Recht, Benjamin
2009-01-01
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low rank matrix. These results improve on prior work by Candes and Recht; Candes and Tao; and Keshavan, Montanari, and Oh. The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.
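As a reminder of the object being minimized here, the nuclear norm is simply the sum of singular values and serves as the convex surrogate for rank. A minimal numpy sketch on a toy matrix of our own construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-2 matrix of the kind the completion results apply to.
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))

def nuclear_norm(A):
    """Sum of singular values: the convex surrogate for rank(A)."""
    return np.linalg.svd(A, compute_uv=False).sum()

rank = np.linalg.matrix_rank(M)   # counts the nonzero singular values
nn = nuclear_norm(M)              # sums them instead
```

Rank counts the nonzero singular values while the nuclear norm sums them, which is why the nuclear norm is tractable to minimize where rank is not.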
High-dimensional covariance matrix estimation with missing observations
Lounici, Karim
2012-01-01
In this paper, we study the problem of high-dimensional, approximately low-rank covariance matrix estimation with missing observations. We propose a simple procedure that is computationally tractable in high dimensions and does not require imputation of the missing data. We establish non-asymptotic sparsity oracle inequalities for the estimation of the covariance matrix in the Frobenius and spectral norms, valid for any setting of the sample size and the dimension of the observations. We further establish minimax lower bounds showing that our rates are minimax optimal up to a logarithmic factor.
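The debiasing step behind such procedures is easy to state: if each coordinate of a mean-zero sample is observed independently with probability δ, the zero-imputed sample covariance can be corrected entrywise, with no imputation of the data themselves. A minimal numpy sketch under that missing-at-random assumption (our own illustration; the full procedure additionally shrinks the spectrum to exploit approximate low rank):

```python
import numpy as np

def cov_missing(X, mask, delta):
    """Unbiased covariance estimate from zero-imputed data, assuming each
    entry of the mean-zero sample X is observed independently with
    probability delta."""
    n = X.shape[0]
    Y = np.where(mask, X, 0.0)          # zero-impute the missing entries
    S = Y.T @ Y / n                     # covariance of the imputed data
    D = np.diag(np.diag(S))
    # off-diagonal entries are attenuated by delta^2, diagonal by delta
    return S / delta**2 + (1.0 / delta - 1.0 / delta**2) * D

# Demo: recover a known 4x4 covariance with 30% of entries missing.
rng = np.random.default_rng(0)
Sigma = np.array([[2., 1., 0., 0.],
                  [1., 2., 1., 0.],
                  [0., 1., 2., 1.],
                  [0., 0., 1., 2.]])
X = rng.multivariate_normal(np.zeros(4), Sigma, size=20000)
mask = rng.random(X.shape) < 0.7
Sigma_hat = cov_missing(X, mask, 0.7)
```

The correction follows from E[Y_i Y_j] = δ²Σ_ij off the diagonal but δΣ_ii on it.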
Institute of Scientific and Technical Information of China (English)
石开仪; 陶秀祥; 籍永华; 任强; 舒伟; 严毅
2013-01-01
White rot fungi have great application potential in the deep processing of low-rank coal. To study the degradation mechanism of low-rank coal, quinoline was used as a nitrogen-containing model compound of low-rank coal and degraded by white rot fungi in DOX medium. The activities of lignin peroxidase (LiP), manganese peroxidase (MnP), laccase (Lac) and polyphenol oxidase (PPO) in the degradation system were measured, and the degradation products were analyzed by FT-IR and GC-MS. The results show that, under the joint catalysis of Lac, LiP and PPO, quinoline undergoes hydroxylation, quinonylation, ring opening, C-N bond cleavage and oxidation, yielding eight corresponding major degradation products, from which the pathway of quinoline degradation by white rot fungi was inferred.
Topics in Matrix Sampling Algorithms
Boutsidis, Christos
2011-01-01
We study three fundamental problems of linear algebra, lying at the heart of various machine learning applications, namely: 1) "Low-rank Column-based Matrix Approximation". We are given a matrix A and a target rank k. The goal is to select a subset of columns of A and, by using only these columns, compute a rank-k approximation to A that is as good as the rank-k approximation that would have been obtained by using all the columns; 2) "Coreset Construction in Least-Squares Regression". We are given a matrix A and a vector b. Consider the (over-constrained) least-squares problem of minimizing ||Ax-b|| over all vectors x in D. The domain D represents the constraints on the solution and can be arbitrary. The goal is to select a subset of the rows of A and b and, by using only these rows, find a solution vector that is as good as the solution vector that would have been obtained by using all the rows; 3) "Feature Selection in K-means Clustering". We are given a set of points described with respect to a large numbe...
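For problem 1), a standard randomized approach is to sample columns with probability proportional to their rank-k leverage scores and then project A onto the span of the chosen columns. A minimal numpy sketch of that idea (our own illustration, not the thesis code; the sample size c is an arbitrary demo choice):

```python
import numpy as np

def leverage_column_sample(A, k, c, rng):
    """Randomized column selection for low-rank approximation: sample c
    columns with probability proportional to their rank-k leverage
    scores, then project A onto the span of the chosen columns."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = (Vt[:k] ** 2).sum(axis=0)          # rank-k leverage scores
    p = lev / lev.sum()
    idx = rng.choice(A.shape[1], size=c, replace=False, p=p)
    C = A[:, idx]
    # best approximation of A within span(C): C @ pinv(C) @ A
    return C @ np.linalg.pinv(C) @ A, idx

# Demo: for an exactly rank-3 matrix, 5 sampled columns generically
# span the column space, so the projection reproduces A.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
A_hat, idx = leverage_column_sample(A, k=3, c=5, rng=rng)
```

On noisy, approximately low-rank inputs the same recipe yields an additive-error rank-k approximation rather than exact recovery.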
Study on Microwave Co-Pyrolysis of Low Rank Coal and Circulating Coal Gas
Institute of Scientific and Technical Information of China (English)
周军; 杨哲; 刘晓峰; 吴雷; 田宇红; 赵西成
2016-01-01
The pyrolysis of low rank coal to produce semi-coke (bluecoke), coal tar and gas is considered the optimal route to its clean and efficient utilization. However, the current mainstream pyrolysis technologies generally impose particle size requirements on the raw coal, give a low yield and poor quality of coal tar, and leave a low content of effective components such as H2, CH4 and CO in the coal gas. To further improve the yield and quality of coal tar from the pyrolysis of low rank coal, we propose recycling the gas produced by microwave pyrolysis back into the microwave pyrolysis reactor, i.e. microwave co-pyrolysis of low rank coal with circulating coal gas. Combining FTIR and GC-MS analysis and characterization of the pyrolysis products, the effects of microwave power, pyrolysis time and coal particle size on product yields and composition were systematically investigated. The results show that, at a circulating gas flow of 0.4 L/min, a microwave power of 800 W, a pyrolysis time of 40 min and a coal particle size of 5-10 mm, the yield of the solid product (semi-coke) reaches 62.2% and that of the liquid products (coal tar and pyrolysis water) reaches 26.8%. The infrared spectra of semi-cokes obtained at different microwave powers and pyrolysis times essentially coincide, whereas the contents of -OH, C=O, C=C and C-O functional groups in semi-cokes from different coal particle sizes differ considerably. Increasing the microwave power, extending the pyrolysis time and reducing the coal particle size all favor the lightening of the coal tar.
Galler, Patrick; Limbeck, Andreas; Boulyga, Sergei F; Stingeder, Gerhard; Hirata, Takafumi; Prohaska, Thomas
2007-07-01
This work introduces a newly developed on-line flow injection (FI) Sr/Rb separation method as an alternative to the common manual Sr/matrix batch separation procedure, since total analysis time is often limited by sample preparation despite the fast data acquisition possible with inductively coupled plasma mass spectrometers (ICP-MS). Separation columns containing approximately 100 µL of Sr-specific resin were used for on-line FI Sr/matrix separation with subsequent determination of 87Sr/86Sr isotope ratios by multiple-collector ICP-MS. The memory effects exhibited by the Sr-specific resin, a major restriction on the repetitive use of this costly material, could successfully be overcome. The method was fully validated by means of certified reference materials. A set of two biological and six geological Sr- and Rb-bearing samples was successfully characterized for its 87Sr/86Sr isotope ratios with precisions of 0.01-0.04% 2 RSD (n = 5-10). Based on our measurements we suggest 87Sr/86Sr isotope ratios of 0.71315 +/- 0.00016 (2 SD) and 0.70931 +/- 0.00006 (2 SD) for the NIST SRM 1400 bone ash and the NIST SRM 1486 bone meal, respectively. Measured 87Sr/86Sr isotope ratios for five basalt samples are in excellent agreement with published data, with deviations from the published values ranging from 0 to 0.03%. A mica sample with a Rb/Sr ratio of approximately 1 was successfully characterized by the proposed method, with an 87Sr/86Sr isotope signature of 0.71824 +/- 0.00029 (2 SD). Synthetic samples with Rb/Sr ratios of up to 10/1 could be measured without significant interference on mass 87, which would otherwise bias the accuracy and uncertainty of the obtained data.
A Singular Value Thresholding Algorithm for Matrix Completion
Cai, Jian-Feng; Shen, Zuowei
2008-01-01
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, producing a sequence of matrices (X^k, Y^k), and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding o...
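The iteration is short enough to sketch in full. Below is a minimal numpy version of our own, following the update rule described above, with the paper's heuristic threshold τ = 5√(mn); the step size and iteration count are illustrative choices, not tuned values.

```python
import numpy as np

def svt_complete(M, mask, tau=None, n_iter=600):
    """Singular Value Thresholding sketch: X^k = shrink_tau(Y^{k-1}),
    then Y^k = Y^{k-1} + step * P_Omega(M - X^k) on observed entries."""
    m, n = M.shape
    if tau is None:
        tau = 5.0 * np.sqrt(m * n)       # heuristic threshold
    step = M.size / mask.sum()           # step scaled by sampling ratio
    Y = np.zeros((m, n))
    X = np.zeros((m, n))
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold sigmas
        Y[mask] += step * (M[mask] - X[mask])
    return X

# Demo: complete a rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.6
X = svt_complete(A, mask)
rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
```

Because the iterates Y^k stay sparse and the shrinkage keeps X^k low rank, each step is cheap, which is the source of the algorithm's scalability.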
IMP: A Message-Passing Algorithm for Matrix Completion
Kim, Byung-Hak; Pfister, Henry D
2010-01-01
A new message-passing (MP) method is considered for the matrix completion problem associated with recommender systems. We attack the problem using a (generative) factor graph model that is related to a probabilistic low-rank matrix factorization. Based on the model, we propose a new algorithm, termed IMP, for the recovery of a data matrix from incomplete observations. The algorithm is based on a clustering followed by inference via MP (IMP). The algorithm is compared with a number of other matrix completion algorithms on real collaborative filtering (e.g., Netflix) data matrices. Our results show that, while many methods perform similarly with a large number of revealed entries, the IMP algorithm outperforms all others when the fraction of observed entries is small. This is helpful because it reduces the well-known cold-start problem associated with collaborative filtering (CF) systems in practice.
SparRec: An effective matrix completion framework of missing data imputation for GWAS
Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen
2016-10-01
Genome-wide association studies present computational challenges for missing data imputation, as advances in genotyping technologies generate datasets of large sample sizes with sample sets genotyped on multiple SNP chips. We present a new imputation framework, SparRec (Sparse Recovery), with the following properties: (1) The optimization models of SparRec, based on low rank and a low number of co-clusters of matrices, differ from current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) SparRec, like other matrix completion methods, is flexible enough to be applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over other state-of-the-art statistical methods, including Beagle and fastPhase.
Cho, Youngjae; Kim, Eiseul; Lee, Yoonju; Han, Sun-Kyung; Kim, Chang-Gyeom; Choo, Dong-Won; Kim, Young-Rok; Kim, Hae-Yeong
2017-04-01
Pediococci are halophilic lactic acid bacteria, within the family Lactobacillaceae, which are involved in the fermentation of various salted and fermented foods, such as kimchi and jeotgal. In this study, a matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS method was developed for the rapid identification of species of the genus Pediococcus. Of the 130 Pediococcus spectra aligned with the Biotyper taxonomy database, 122 isolates (93.9 %) yielded log scores … Within the genus Pediococcus, all of the isolates were correctly identified, of which 84 (64.6 %) and 46 (35.4 %) were identified at the species and genus level, respectively. In comparing food origins, no relationship was found between the bacterial characteristics and the food environment. We were able to produce a Biotyper system for identification of members of the genus Pediococcus with locally extended Pediococcus reference strains. The MALDI-TOF MS method is fast, simple and reliable for discriminating between species in the genus Pediococcus and will therefore be useful for quality control in determining the spoilage of alcoholic beverages or in the production of fermented food.
Performance of Low-rank STAP detectors
Anitori, L.; Srinivasan, R.; Rangaswamy, M.
2008-01-01
In this paper the STAP detector based on the low-rank approximation of the normalized adaptive matched filter (LRNAMF) is investigated for its false alarm probability (FAP) performance. An exact formula for the FAP of the LRNAMF detector is derived using the g-method estimator [4]. The non-CFAR behav…
Low Rank Sparse Coding for Image Classification
2013-12-08
… Singapore; Institute of Automation, Chinese Academy of Sciences, P. R. China; University of Illinois at Urbana-Champaign, Urbana, IL, USA. Abstract: The bag-of-words (BoW) model is one of the most popular models for feature design. It has been successfully applied to …
Extensions of linear-quadratic control, optimization and matrix theory
Jacobson, David H
1977-01-01
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat…
Structure Analysis of Network Traffic Matrix Based on Relaxed Principal Component Pursuit
Wang, Zhe; Xu, Ke; Yin, Baolin
2011-01-01
The network traffic matrix is a kind of flow-level Internet traffic data and is widely applied in network operation and management. Analyzing the composition and structure of traffic matrices is a crucial problem, and mathematical approaches such as Principal Component Analysis (PCA) have been used to handle it. In this paper, we first argue that PCA performs poorly when analyzing traffic matrices polluted by large-volume anomalies, and then propose a new composition model of the network traffic matrix. According to our model, structure analysis can be formally defined as decomposing a traffic matrix into low-rank, sparse, and noise sub-matrices, which is equivalent to the Robust Principal Component Analysis (RPCA) problem defined in [13]. Based on the Relaxed Principal Component Pursuit (Relaxed PCP) method and the Accelerated Proximal Gradient (APG) algorithm, an iterative algorithm for decomposing a traffic matrix is presented, and our experimental results demonstrate its efficiency and flexibility. At last, f...
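The decomposition M = L + S that such an analysis targets can be computed by a short augmented-Lagrangian loop alternating two proximal steps. A minimal numpy sketch of Principal Component Pursuit of our own (the paper itself uses Relaxed PCP with an APG solver; the λ and μ defaults below follow the common 1/√max(m,n) and mn/(4‖M‖₁) heuristics and are assumptions of this demo):

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding (the prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(M, lam=None, mu=None, n_iter=500):
    """Decompose M into low-rank L plus sparse S via an augmented
    Lagrangian for min ||L||_* + lam*||S||_1 subject to L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank update: soft-threshold the singular values
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft threshold
        S = soft(M - L + Y / mu, lam / mu)
        # dual ascent on the constraint L + S = M
        Y += mu * (M - L - S)
    return L, S

# Demo: rank-2 "traffic" matrix corrupted by 5% large sparse anomalies.
rng = np.random.default_rng(2)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
S0 = np.zeros((40, 40))
spikes = rng.random((40, 40)) < 0.05
S0[spikes] = 10.0 * rng.standard_normal(spikes.sum())
L_hat, S_hat = rpca(L0 + S0)
```

In the traffic-matrix setting, L captures the regular low-rank structure and S isolates the volume anomalies.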
Phase diagram of matrix compressed sensing
Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka
2016-12-01
In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.
An entropy-driven matrix completion (E-MC) approach to complex network mapping
Koochakzadeh, Ali; Pal, Piya
2016-05-01
Mapping the topology of a complex network in a resource-efficient manner is a challenging problem with applications in internet mapping, social network inference, and so forth. We propose a new entropy driven algorithm leveraging ideas from matrix completion, to map the network using monitors (or sensors) which, when placed on judiciously selected nodes, are capable of discovering their immediate neighbors. The main challenge is to maximize the portion of discovered network using only a limited number of available monitors. To this end, (i) a new measure of entropy or uncertainty is associated with each node, in terms of the currently discovered edges incident on that node, and (ii) a greedy algorithm is developed to select a candidate node for monitor placement based on its entropy. Utilizing the fact that many complex networks of interest (such as social networks), have a low-rank adjacency matrix, a matrix completion algorithm, namely 1-bit matrix completion, is combined with the greedy algorithm to further boost its performance. The low rank property of the network adjacency matrix can be used to extrapolate a portion of missing edges, and consequently update the node entropies, so as to efficiently guide the network discovery algorithm towards placing monitors on the nodes that can turn out to be more informative. Simulations performed on a variety of real world networks such as social networks and peer networks demonstrate the superior performance of the matrix-completion guided approach in discovering the network topology.
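The greedy core of such an algorithm is simple to sketch: score each unmonitored node by how many of its incident edges are still undiscovered (a crude stand-in for the paper's entropy measure, and without the 1-bit matrix-completion extrapolation), and monitor the highest-scoring node. A hypothetical minimal version:

```python
import numpy as np

def greedy_monitors(adj, budget):
    """Greedy monitor placement on a boolean adjacency matrix: each
    monitor reveals all edges incident on its node; nodes are chosen by
    a simple undiscovered-edge count as the uncertainty score."""
    monitors = []
    discovered = np.zeros_like(adj, dtype=bool)
    for _ in range(budget):
        unknown = (adj & ~discovered).sum(axis=1)   # per-node score
        unknown[monitors] = -1                      # don't reuse nodes
        v = int(np.argmax(unknown))
        monitors.append(v)
        discovered[v, :] |= adj[v, :]               # reveal incident edges
        discovered[:, v] |= adj[:, v]
    return monitors, discovered

# Demo: on a star graph a single monitor on the hub reveals every edge.
adj = np.zeros((6, 6), dtype=bool)
adj[0, 1:] = True
adj[1:, 0] = True
monitors, discovered = greedy_monitors(adj, budget=1)
```

In the paper, the low-rank completion step would update the scores between placements by extrapolating likely edges from the partially observed adjacency matrix.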
Franklin, Joel N
2003-01-01
Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.
On Some Extended Block Krylov Based Methods for Large Scale Nonsymmetric Stein Matrix Equations
Directory of Open Access Journals (Sweden)
Abdeslem Hafid Bentbib
2017-03-01
In the present paper, we consider the large-scale Stein matrix equation with a low-rank constant term, AXB - X + EF^T = 0. These matrix equations appear in many applications, including discrete-time control problems, filtering and image restoration. The proposed methods are based on projection onto the extended block Krylov subspace with a Galerkin approach (GA) or with minimization of the residual norm. We give some results on the residual and error norms and report some numerical experiments.
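For small dimensions the Stein equation can be solved directly by vectorization, which also makes clear why large-scale Krylov methods are needed: the Kronecker system has nm unknowns. A minimal numpy sketch of our own, using the identity vec(AXB) = (Bᵀ ⊗ A) vec(X) with column-major vec:

```python
import numpy as np

def solve_stein_dense(A, B, E, F):
    """Solve A X B - X + E F^T = 0 by vectorization:
    (I - kron(B.T, A)) vec(X) = vec(E F^T), column-major vec()."""
    n, m = A.shape[0], B.shape[0]
    C = E @ F.T
    K = np.eye(n * m) - np.kron(B.T, A)
    x = np.linalg.solve(K, C.reshape(-1, order="F"))
    return x.reshape(n, m, order="F")

# Demo: a well-posed random instance (spectra scaled so that no product
# of eigenvalues of A and B equals 1), verified via the residual.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)); A /= 2.0 * np.linalg.norm(A, 2)
B = rng.standard_normal((5, 5)); B /= 2.0 * np.linalg.norm(B, 2)
E = rng.standard_normal((4, 2)); F = rng.standard_normal((5, 2))
X = solve_stein_dense(A, B, E, F)
residual = np.linalg.norm(A @ X @ B - X + E @ F.T)
```

The dense solve costs O((nm)³), which is exactly the cost the extended block Krylov projections avoid by exploiting the low rank of EFᵀ.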
Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression
Halim Boukaram, Wajih
2017-09-14
We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
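The building block named above, the one-sided Jacobi SVD, is attractive on GPUs because each column-pair rotation is independent. It can be sketched serially in a few lines of numpy (our own illustration, not the batched CUDA kernels of the paper):

```python
import numpy as np

def jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: rotate pairs of columns of U = A*R until all
    columns are mutually orthogonal; the singular values are then the
    column norms, and V accumulates the rotations R."""
    U = np.array(A, dtype=float)
    m, n = U.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if abs(gamma) < tol * np.sqrt(alpha * beta):
                    continue
                # rotation angle that orthogonalizes columns p and q
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                for W in (U, V):
                    Wp, Wq = W[:, p].copy(), W[:, q].copy()
                    W[:, p] = c * Wp - s * Wq
                    W[:, q] = s * Wp + c * Wq
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    order = np.argsort(sigma)[::-1]
    sigma = sigma[order]
    Un = U[:, order] / np.where(sigma > 0, sigma, 1.0)
    return Un, sigma, V[:, order].T

# Demo on a random 8x5 matrix; singular values match LAPACK's SVD.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
U, s, Vt = jacobi_svd(A)
```

The inherent parallelism comes from the fact that disjoint column pairs can be rotated simultaneously, which is what the batched GPU kernels exploit across many small low-rank blocks at once.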
Institute of Scientific and Technical Information of China (English)
任其龙
2016-01-01
The production of basic chemical materials such as calcium carbide and acetylene is an important route to the value-added conversion of low-rank coal. However, current production technologies suffer from severe pollution, large energy consumption and high cost. This project (National Key R&D Program of China, 2016YFB0301800) is intended to reveal the basic science of mass transfer and chemical conversion under the extreme conditions of an electric-field-coupled reaction system, and to deepen understanding of the principles of process control, scale-up and energy recovery. On this basis, several key production technologies are expected to be greatly improved, including industrialization of the rotating-arc plasma torch, a high-current power supply for hydrogen plasma with low-voltage arc starting, long-life electrode design and ablation-compensation technology, low-cost molding of powdery raw materials, and high-temperature transport of solid materials. Building on these achievements, a highly efficient, energy-saving and low-cost process for producing calcium carbide and acetylene from low-rank coal will be developed, together with a 5000 t/a demonstration plant for producing acetylene from coal by plasma pyrolysis and an 800000 t/a industrial plant for the regenerative production of calcium carbide.
Speaking Fluently And Accurately
Institute of Scientific and Technical Information of China (English)
Joseph DeVeto
2004-01-01
Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately, but also more fluently. That technique is dictation.
Directory of Open Access Journals (Sweden)
Nelson Valero Valero
2012-05-01
Bacteria capable of biotransforming low-rank coal (LRC) were isolated from environmental samples containing coal residues at the El Cerrejón mine. Of 75 bacterial morphotypes isolated, 32 grew on minimal salts agar with 5 % coal. A protocol was designed to select the strains with the greatest LRC-biotransforming activity; it includes isolation on a selective medium with powdered LRC and qualitative and quantitative tests of LRC solubilization in solid and liquid media. In the strains producing the highest values of humic substances (HS), the solubilization mechanism was associated with pH changes in the medium, probably due to the production of extracellular alkaline substances. The largest number of isolates, and those with the greatest LRC-solubilizing activity, came from mud with a high content of coal residues and from the rhizospheres of Typha domingensis and Cenchrus ciliaris growing on sediments mixed with coal particles, suggesting that the ability of bacteria to solubilize LRC may be related to the microhabitat in which the populations develop.
Institute of Scientific and Technical Information of China (English)
黄山秀; 马名杰
2013-01-01
Coal tar pitch and a strongly caking fat coal were selected as hot-state binders and blended, in different proportions, with low-rank bituminous pulverized coal (from Shenmu or Yuzhou) and other raw materials to make briquettes. Measurements of briquette hot strength show that briquettes bound with coal tar pitch are stronger at high temperature than those bound with fat coal. Electron-microscope analysis of the briquette microstructure confirms that the pitch-bound briquettes also have better cohesion and water resistance: micrographs of sections indicate that bubbles, formed as volatiles released from the hot pitch pass through the plastic colloidal mass, compress that mass into a stronger, continuous network structure on the coal-grain surfaces. The particle size of the coal tar pitch was also found to affect briquette hot strength.
Study on Low-Temperature Co-pyrolysis of Biomass and Low Rank Coal
Institute of Scientific and Technical Information of China (English)
何选明; 潘叶; 陈康; 吴梁森
2012-01-01
Wild duckweed was blended with long-flame coal in different proportions, and co-pyrolysis experiments were carried out in a purpose-built coal carbonization apparatus; the liquid product (coal tar) was analyzed by GC-MS to explore the reactions of low-temperature co-pyrolysis of biomass and coal and the upgrading of the tar to lighter fractions. Thermogravimetric analysis was used to study the mechanism by which the biomass addition affects the pyrolysis of the coal. The results show that as the biomass fraction of the blend increases, the tar yield rises by about 10 %, and straight-chain alkanes and high-value compounds such as naphthalene, phenol and fluorene become enriched in the tar, achieving the goal of upgrading the low-temperature coal tar. The weight loss of the samples increases, the TG curves shift toward lower temperatures, and the pyrolysis activation energy gradually decreases. The thermal decomposition kinetics of the long-flame coal, the biomass and their blends follow a pseudo-first-order model, and blending the two promotes the overall reaction.
Institute of Scientific and Technical Information of China (English)
马坚伟; 徐杰; 鲍跃全; 于四伟
2012-01-01
Compressive sensing (also called compressive sampling, CS) is an information theory that has emerged recently. Its core idea is that, as long as a high-dimensional signal is compressible or sparse in some transform domain, it can be projected onto a low-dimensional space by a measurement matrix incoherent with the transform basis, and the original signal can then be reconstructed with high probability from these few projections by solving an optimization problem. CS breaks through the Shannon theorem's restriction on sampling frequency: the required measurements can be obtained with fewer sampling resources, higher sampling speed, and lower hardware and software complexity. The theory has been applied widely in digital cameras, medical imaging, remote sensing, seismic exploration, multimedia hybrid coding, communications, structural health monitoring and other fields. This article summarizes the key problems in CS research, traces the development of CS optimization from sparsity constraints to low-rank constraints, and reviews applications of CS in remote sensing, seismic exploration and several related fields.
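The sparse-recovery idea described above can be made concrete with a minimal sketch (not from the reviewed paper; the dimensions, the regularization weight, and the choice of ISTA as solver are illustrative assumptions): a sparse signal is recovered from a few random Gaussian measurements by iterative soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
y = A @ x_true                                 # few linear measurements

# ISTA: solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by gradient + shrinkage steps
lam = 1e-2
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    g = x - step * (A.T @ (A @ x - y))         # gradient step on the quadratic term
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With m = 80 measurements of a 5-sparse length-256 signal, the relative error is small, consistent with the exact-recovery regime the survey describes.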
The density matrix renormalization group for ab initio quantum chemistry
Wouters, Sebastian
2014-01-01
During the past 15 years, the density matrix renormalization group (DMRG) has become increasingly important for ab initio quantum chemistry. Its underlying wavefunction ansatz, the matrix product state (MPS), is a low-rank decomposition of the full configuration interaction tensor. The virtual dimension of the MPS, the rank of the decomposition, controls the size of the corner of the many-body Hilbert space that can be reached with the ansatz. This parameter can be systematically increased until numerical convergence is reached. The MPS ansatz naturally captures exponentially decaying correlation functions. Therefore DMRG works extremely well for noncritical one-dimensional systems. The active orbital spaces in quantum chemistry are however often far from one-dimensional, and relatively large virtual dimensions are required to use DMRG for ab initio quantum chemistry (QC-DMRG). The QC-DMRG algorithm, its computational cost, and its properties are discussed. Two important aspects to reduce the computational co...
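The MPS ansatz mentioned above can be illustrated with a toy sketch (not the QC-DMRG algorithm itself; the six-site random "wavefunction" is an assumption): a sweep of SVDs factors a state vector into site tensors, and keeping every singular value makes the decomposition exact, while truncating them would bound the virtual dimension.

```python
import numpy as np

rng = np.random.default_rng(5)
L, d = 6, 2                              # six sites, local dimension two
psi = rng.standard_normal(d ** L)
psi /= np.linalg.norm(psi)               # a generic normalized state vector

# sweep of SVDs: peel off one site tensor at a time; the bond dimension chi
# is the rank of the decomposition at each cut
tensors, rest = [], psi.reshape(1, -1)
for _ in range(L - 1):
    chi = rest.shape[0]                  # current virtual (bond) dimension
    U, s, Vt = np.linalg.svd(rest.reshape(chi * d, -1), full_matrices=False)
    tensors.append(U.reshape(chi, d, -1))
    rest = s[:, None] * Vt               # carry the remainder to the right
tensors.append(rest.reshape(-1, d, 1))

# contract the site tensors back together and compare with psi
recon = tensors[0]
for T in tensors[1:]:
    recon = np.tensordot(recon, T, axes=([-1], [0]))
err = np.linalg.norm(recon.reshape(-1) - psi)
```

Since no singular values are discarded, the reconstruction agrees with the original vector to machine precision; systematic truncation of `s` is what makes the ansatz variational.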
Random matrix theory, interacting particle systems and integrable systems
Forrester, Peter
2014-01-01
Random matrix theory is at the intersection of linear algebra, probability theory and integrable systems, and has a wide range of applications in physics, engineering, multivariate statistics and beyond. This volume is based on a Fall 2010 MSRI program which generated the solution of long-standing questions on universalities of Wigner matrices and beta-ensembles and opened new research directions especially in relation to the KPZ universality class of interacting particle systems and low-rank perturbations. The book contains review articles and research contributions on all these topics, in addition to other core aspects of random matrix theory such as integrability and free probability theory. It will give both established and new researchers insights into the most recent advances in the field and the connections among many subfields.
Huang, Li; Li, Xianhong; Guo, Pengfei; Yao, Yuhua; Liao, Bo; Zhang, Weiwei; Wang, Fayou; Yang, Jiasheng; Zhao, Yulong; Sun, Hailiang; He, Pingan; Yang, Jialiang
2017-06-16
Low-rank matrix completion has been demonstrated to be powerful in predicting antigenic distances among influenza viruses and vaccines from a partially revealed hemagglutination inhibition (HI) table. Meanwhile, influenza hemagglutinin (HA) protein sequences are also effective in inferring antigenic distances. Thus, it is natural to integrate HA protein sequence information into a low-rank matrix completion model to help infer influenza antigenicity, which is critical to influenza vaccine development. We have proposed a novel algorithm called biological matrix completion with side information (BMCSI), which first measures HA protein sequence similarities among influenza viruses (especially on epitopes) and then integrates the similarity information into a low-rank matrix completion model to predict influenza antigenicity. This algorithm exploits both the correlations among viruses and vaccines in serological tests and the power of HA sequence in predicting influenza antigenicity. We applied this model to H3N2 seasonal influenza virus data. Compared to previous methods, we significantly reduced the prediction root-mean-square error in a 10-fold cross validation analysis. Based on the cartographies constructed from imputed data, we showed that the antigenic evolution of H3N2 seasonal influenza is generally S-shaped while the genetic evolution is half-circle shaped. We also showed that the Spearman correlation between genetic and antigenic distances (among antigenic clusters) is 0.83, demonstrating a globally high correspondence and some local discrepancies between influenza genetic and antigenic evolution. Finally, we showed that 4.4%±1.2% genetic variance (corresponding to 3.11±1.08 antigenic distances) caused an antigenic drift event for H3N2 influenza viruses historically. The software and data for this study are available at http://bi.sky.zstu.edu.cn/BMCSI/. Contact: jialiang.yang@mssm.edu; pinganhe@zstu.edu.cn. Supplementary data are available at Bioinformatics online.
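For readers unfamiliar with low-rank matrix completion, the kernel of such methods (without the HA side information that BMCSI adds) can be sketched with the soft-impute iteration; the matrix sizes, sampling rate, and shrinkage threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 60, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-3 ground truth
mask = rng.random((n, n)) < 0.5                                # ~50% entries observed

# soft-impute: alternate singular-value shrinkage (a nuclear-norm proximal
# step) with refitting the observed entries
tau = 0.5                                    # shrinkage threshold (illustrative)
X = np.where(mask, M, 0.0)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink the singular values
    X[mask] = M[mask]                        # put back the known entries

err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The missing half of the entries is filled in almost exactly because the number of observations far exceeds the degrees of freedom of a rank-3 matrix; BMCSI augments this core with sequence-similarity side information.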
A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering
Directory of Open Access Journals (Sweden)
Yubao Sun
2015-01-01
Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
Institute of Scientific and Technical Information of China (English)
钟梅; 高士秋; 张志凯; 岳君容; 许光文
2012-01-01
Physicochemical properties of char made by pyrolyzing small particles of a low-rank coal in an O2-containing atmosphere in a quartz-sand fluidized bed were investigated. The variables tested were coal particle size, O2 concentration, pyrolysis temperature and residence time. Char with fixed carbon content above 82 %(ω) and volatile matter below 7 %(ω) was obtained by pyrolyzing coal of 1–13 mm particle size at temperatures above 850 °C, with O2 content of at least 3 %(φ) and pyrolysis times of at least 120 s. As the temperature increased from 650 °C to 950 °C, the interlayer spacing (d002) of the crystallite structure calculated from the XRD intensities decreased from 0.383 to 0.372 nm, indicating a gradually more condensed and ordered structure. The BET specific surface area first increased and then decreased with O2 concentration, reaching its maximum of 242.71 m2/g at 7 %(φ) O2, which corresponded to the highest oxidation reactivity of the char. Long pyrolysis times in the O2-containing atmosphere lowered the specific surface area and reactivity because some of the newly formed pores were burned off.
Institute of Scientific and Technical Information of China (English)
李沛; 马东民; 张辉; 李卫波; 杨甫
2016-01-01
To study how the wettability of coals of different rank influences coalbed methane (CBM) adsorption/desorption, samples of Sihe No. 3 coal and Dafosi No. 4 coal were collected for contact-angle measurements and adsorption/desorption experiments, and the isosteric heat of adsorption was analyzed by thermodynamic calculation. The results show that the water-wettability of Dafosi No. 4 coal is far better than that of Sihe No. 3 coal, because the Dafosi coal is a low-rank long-flame (CY) coal containing more oxygen-bearing functional groups such as carboxyl and hydroxyl, has hydrophilic material components, and has better developed pores and cracks. Wettability influences the moisture content of the coal, which in turn indirectly influences the adsorption/desorption characteristics of CBM. The relationships among wettability, moisture content, desorption rate and recovery efficiency for the two coals are complicated: they are constrained by a critical water content and exhibit a critical temperature point. Comparison of the isosteric adsorption heats shows that hydrophilicity is unfavorable to CBM adsorption. The heat released on adsorption is much less than the heat absorbed on desorption, so there is significant desorption hysteresis; the hysteresis decreases with increasing adsorption/desorption capacity, and this weakening trend is more obvious for the low-rank coal.
Fine Particle Formation During O2/CO2 Combustion of Low-Rank Coal
Institute of Scientific and Technical Information of China (English)
汪应红; 王群英; 付晓恒
2012-01-01
Two typical low-rank coals were burned in a drop-tube furnace under different atmospheres. The particulate matter (PM) produced was collected with a cyclone and a low-pressure impactor (LPI). Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to characterize the morphology of the supermicron and submicron PM, and SEM with energy-dispersive X-ray analysis (SEM-EDX), TEM with energy-dispersive X-ray spectroscopy (TEM-EDS) and computer-controlled SEM (CCSEM) were applied to analyze the chemical composition of the ash particles. It was confirmed that, compared with combustion in air, an O2/CO2 atmosphere changes the size distribution of the fine ash particles and the concentrations of their chemical constituents, but the formation mechanisms of the fine particles are the same in both atmospheres. For the lignite, which contains a larger amount of organically associated minerals, the O2/CO2 atmosphere increases the degree of vaporization of Fe, Na/K sulfates, Al and Si, and thus increases the concentration of submicron particles. Fe was also found to behave distinctively in the lignite: increasing the oxygen concentration in oxy-fuel combustion enhances the attachment of vaporized Fe to other particles.
Bodewig, E
1959-01-01
Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well
Link Prediction via Convex Nonnegative Matrix Factorization on Multiscale Blocks
Directory of Open Access Journals (Sweden)
Enming Dong
2014-01-01
Full Text Available Low-rank matrix approximations have been used for link prediction in networks; they are usually globally optimal methods that fail to exploit local information. The block structure is a significant local feature of matrices: entities in the same block have similar values, which implies that links are more likely to be found within dense blocks. We use this insight to build a probabilistic latent variable model that finds missing links by convex nonnegative matrix factorization with block detection. Experiments show that this method gives better prediction accuracy than the original method alone. Unlike the original low-rank approximation methods for link prediction, the sparseness of the solutions is in accord with the sparsity of most real complex networks. To scale to massive networks, we use the block information to map the matrices onto distributed architectures and give a divide-and-conquer prediction method. Experiments show that it outperforms the common-neighbors method when the networks have a large number of missing links.
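A minimal sketch of the underlying idea, scoring missing links from a low-rank nonnegative factorization of the adjacency matrix, is below. It uses standard multiplicative updates rather than the paper's convex NMF with block detection, and the toy two-community network is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy network with two dense blocks (communities) and no cross-block links
A = np.zeros((20, 20))
A[:10, :10] = rng.random((10, 10)) < 0.8
A[10:, 10:] = rng.random((10, 10)) < 0.8
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops

# rank-2 NMF, A ≈ W H, via Lee–Seung multiplicative updates
k = 2
W = rng.random((20, k))
H = rng.random((k, 20))
for _ in range(500):
    H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
    W *= (A @ H.T) / (W @ H @ H.T + 1e-9)

scores = W @ H   # a high score on a zero entry suggests a missing link
```

Because the factorization captures the block structure, scores between nodes in the same community are much higher than scores across communities, which is exactly the signal a block-aware link predictor exploits.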
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the "Relative Bioavailability Leaching Procedure" (RBALP) at pH 1.5, the same test conducted at pH 2.5, the "Ohio State University In vitro Gastrointestinal" method (OSU IVG), the "Urban Soil Bioaccessible Lead Test", the modified "Physiologically Based Extraction Test" and the "Waterfowl Physiologically Based Extraction Test." All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
Groundwater recharge: Accurately representing evapotranspiration
CSIR Research Space (South Africa)
Bugan, Richard DH
2011-09-01
Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...
Craps, Ben; Nguyen, Kévin
2016-01-01
Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.
Zhan, Xingzhi
2002-01-01
The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.
Accurate structural correlations from maximum likelihood superpositions.
Directory of Open Access Journals (Sweden)
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
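The core computation above, PCA of a covariance/correlation matrix estimated from an ensemble, can be sketched as follows. The synthetic "ensemble" with one planted collective mode is an assumption, and the paper's maximum-likelihood covariance estimate is replaced here by a plain sample covariance.

```python
import numpy as np

rng = np.random.default_rng(3)
# toy ensemble: 100 structures described by 30 coordinates, generated as
# one dominant collective mode plus small independent noise
mode = rng.standard_normal(30)
amplitudes = rng.standard_normal((100, 1))
X = amplitudes * mode + 0.1 * rng.standard_normal((100, 30))

C = np.cov(X, rowvar=False)              # 30x30 covariance across the ensemble
evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
pc1 = evecs[:, -1]                       # dominant mode of positional correlation

# the leading principal component recovers the planted mode (up to sign)
overlap = abs(pc1 @ mode) / np.linalg.norm(mode)
```

In the paper's setting the same eigen-decomposition is applied to the maximum-likelihood correlation matrix of superposed structures, and the components are then color-coded onto the molecule as "PCA plots."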
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
Negahban, Sahand
2010-01-01
We consider the matrix completion problem under a form of row/column weighted entrywise sampling, including the case of uniform entrywise sampling as a special case. We analyze the associated random observation operator, and prove that with high probability, it satisfies a form of restricted strong convexity with respect to weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling and for both exact and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an $M$-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in weighted Frobenius norm for recovering matrices lying within $\ell_q$-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algo...
Yang, Jian; Luo, Lei; Qian, Jianjun; Tai, Ying; Zhang, Fanlong; Xu, Yong
2017-01-01
Recently, regression analysis has become a popular tool for face recognition. Most existing regression methods use the one-dimensional, pixel-based error model, which characterizes the representation error individually, pixel by pixel, and thus neglects the two-dimensional structure of the error image. We observe that occlusion and illumination changes generally lead, approximately, to a low-rank error image. In order to make use of this low-rank structural information, this paper presents a two-dimensional image-matrix-based error model, namely, nuclear norm based matrix regression (NMR), for face representation and classification. NMR uses the minimal nuclear norm of representation error image as a criterion, and the alternating direction method of multipliers (ADMM) to calculate the regression coefficients. We further develop a fast ADMM algorithm to solve the approximate NMR model and show it has a quadratic rate of convergence. We experiment using five popular face image databases: the Extended Yale B, AR, EURECOM, Multi-PIE and FRGC. Experimental results demonstrate the performance advantage of NMR over the state-of-the-art regression-based methods for face recognition in the presence of occlusion and illumination variations.
Bhatia, Rajendra
1997-01-01
A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...
Matrix Factorisation-based Calibration For Air Quality Crowd-sensing
Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle
2017-04-01
sensors share some information using the APISENSE® crowdsensing platform and we aim to calibrate the sensor responses from the data directly. For that purpose, we express the sensor readings as a low-rank matrix with missing entries and we revisit self-calibration as a Matrix Factorization (MF) problem. In our proposed framework, one factor matrix contains the calibration parameters while the other is structured by the calibration model and contains some values of the sensed phenomenon. The MF calibration approach also uses the precise measurements from ATMO—the French public institution—to drive the calibration of the mobile sensors. MF calibration can be improved using, e.g., the mean calibration parameters provided by the sensor manufacturers, or using sparse priors or a model of the physical phenomenon. All our approaches are shown to provide a better calibration accuracy than matrix-completion-based and robust-regression-based methods, even in difficult scenarios involving a lot of missing data and/or very few accurate references. When combined with a dictionary of air quality patterns, our experiments suggest that MF is not only able to perform sensor network calibration but also to provide detailed maps of air quality.
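A toy version of the MF calibration idea can be sketched as follows: sensor readings form a rank-2 matrix (per-sensor gain and offset times the shared signal and a row of ones), completed by alternating least squares on the observed entries, with one accurate reference sensor standing in for the ATMO measurements. All names, dimensions, and the affine sensor model are illustrative assumptions, not the APISENSE® pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
S, T = 8, 40                              # sensors, time steps
x = np.abs(rng.standard_normal(T))        # true pollutant levels
gain = 0.5 + rng.random(S)                # unknown per-sensor gains
offset = rng.standard_normal(S)           # unknown per-sensor offsets

# affine sensor model => readings form a rank-2 matrix Y = [gain offset] @ [x; 1]
Y = np.outer(gain, x) + offset[:, None]
mask = rng.random((S, T)) < 0.6           # sensors report intermittently
mask[0] = True                            # sensor 0 acts as the accurate reference

# alternating least squares on observed entries; the second factor is
# structured by the calibration model (a fixed row of ones for the offsets)
F = 0.5 + rng.random((S, 2))              # estimated [gain, offset] per sensor
xh = rng.random(T)                        # estimated shared signal
for _ in range(200):
    for t in range(T):                    # signal value at each time step
        m = mask[:, t]
        a, b = F[m, 0], F[m, 1]
        xh[t] = a @ (Y[m, t] - b) / (a @ a)
    G = np.column_stack([xh, np.ones(T)])
    for s in range(S):                    # each sensor's calibration parameters
        m = mask[s]
        F[s] = np.linalg.lstsq(G[m], Y[s, m], rcond=None)[0]

Y_hat = np.outer(F[:, 0], xh) + F[:, 1][:, None]
rel_err = np.linalg.norm(Y_hat - Y) / np.linalg.norm(Y)
```

The reconstruction error covers the unobserved entries as well, so a small value means the factorization simultaneously calibrated the sensors and filled in their missing readings, which is the essence of the MF calibration framework described above.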
NNLOPS accurate associated HW production
Astill, William; Re, Emanuele; Zanderighi, Giulia
2016-01-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Belitsky, A V
2016-01-01
The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multiparticle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unravelled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
Accurate renormalization group analyses in neutrino sector
Energy Technology Data Exchange (ETDEWEB)
Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)
2014-08-15
We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and the Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high energy physics, our method can in principle be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.
Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2012-01-01
Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank nonnegative matrices, yielding a parts-based, linear representation of nonnegative data. Recently, graph regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics while respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure reflecting the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF and SVD.
Compressed Sensing and Matrix Completion with Constant Proportion of Corruptions
Li, Xiaodong
2011-01-01
We improve existing results in the field of compressed sensing and matrix completion when sampled data may be grossly corrupted. We introduce three new theorems. 1) In compressed sensing, we show that if the m \times n sensing matrix has independent Gaussian entries, then one can recover a sparse signal x exactly by tractable \ell_1 minimization even if a positive fraction of the measurements are arbitrarily corrupted, provided the number of nonzero entries in x is O(m/(log(n/m) + 1)). 2) In the very general sensing model introduced in "A probabilistic and RIPless theory of compressed sensing" by Candes and Plan, and assuming a positive fraction of corrupted measurements, exact recovery still holds if the signal now has O(m/(log^2 n)) nonzero entries. 3) Finally, we prove that one can recover an n \times n low-rank matrix from m corrupted sampled entries by tractable optimization provided the rank is on the order of O(m/(n log^2 n)); again, this holds when there is a positive fraction of corrupted samples.
Song Recommendation with Non-Negative Matrix Factorization and Graph Total Variation
Benzi, Kirell; Bresson, Xavier; Vandergheynst, Pierre
2016-01-01
This work formulates a novel song recommender system as a matrix completion problem that benefits from collaborative filtering through Non-negative Matrix Factorization (NMF) and content-based filtering via total variation (TV) on graphs. The graphs encode both playlist proximity information and song similarity, using a rich combination of audio, meta-data and social features. As we demonstrate, our hybrid recommendation system is very versatile and incorporates several well-known methods while outperforming them. Particularly, we show on real-world data that our model overcomes w.r.t. two evaluation metrics the recommendation of models solely based on low-rank information, graph-based information or a combination of both.
Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach
Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun
2015-02-01
The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled efficiently by the alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs
Charara, Ali
2017-03-06
Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing threads synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.
Kargın, Levent; Kurt, Veli
2015-01-01
In this study, by obtaining the matrix analog of Euler's reflection formula for the classical gamma function, we expand the domain of the gamma matrix function and give an infinite product expansion of sin πxP. Furthermore, we define the Riemann zeta matrix function and evaluate some other matrix integrals. We prove a functional equation for the Riemann zeta matrix function.
Evaluation of elemental sulphur in biodesulphurized low rank coals
Energy Technology Data Exchange (ETDEWEB)
L. Gonsalvesh; S.P. Marinov; M. Stefanova; R. Carleer; J. Yperman [Bulgarian Academy of Sciences, Sofia (Bulgaria). Institute of Organic Chemistry
2011-09-15
A new procedure for elemental sulphur (S{sup el}) determination in coal and its fractions is offered. It includes exhaustive CHCl{sub 3} extraction and subsequent quantitative analysis of the extracts by HPLC using a C{sub 18} reversed phase column. Its application makes it possible to achieve a better sulphur balance and to specify the changes in organic and elemental sulphur as a result of biotreatments. Two Bulgarian high sulphur coal samples, i.e. subbituminous (Pirin) and lignite (Maritza East), and one Turkish lignite (Cayirhan-Beypazari) are used. Prior to biotreatments, the samples are demineralized and depyritized. In the biodesulphurization processes, the applied microorganisms are the white rot fungus 'Phanerochaete chrysosporium' (ME446) and the thermophilic and acidophilic archaeon 'Sulfolobus solfataricus' (ATCC 35091). In the preliminarily demineralized and depyritized coals, the highest presence of S{sup el} is registered, which is explained by their natural weathering. As a result of the implemented biotreatments, the amount of S{sup el} could be reduced in the range of 16.1-53.8%. The content of S{sup el} is also assessed as part of the total sulphur and organic sulphur. The following range of S{sup el} content is measured: 0.01-0.16 wt.% or 0.3-4.6% of total sulphur and 0.3-5.1% of organic sulphur. In this way, more precise information is obtained concerning the organic sulphur content. 31 refs., 4 figs., 6 tabs.
Low Rank Approximation in $G_0W_0$ Approximation
Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G
2016-01-01
The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...
Moving target imaging using sparse and low-rank structure
Mason, Eric; Yazici, Birsen
2016-05-01
In this paper we present a method for passive radar detection of ground moving targets using sparsely distributed apertures. We assume the scene is illuminated by a source of opportunity and measure the backscattered signal. We correlate measurements from two different receivers, then form a linear forward model that operates on a rank-one, positive semi-definite (PSD) operator, formed by taking the tensor product of the phase-space reflectivity function with itself. Utilizing this structure, image formation and velocity estimation are defined in a constrained optimization framework. Additionally, image formation and velocity estimation are formulated as separate optimization problems, which results in computational savings. Position estimation is posed as a rank-one PSD-constrained least squares problem. Velocity estimation is then performed as a cardinality-constrained least squares problem, solved using a greedy algorithm. We demonstrate the performance of our method with numerical simulations, showing improvement over back-projection imaging, and evaluate the effect of spatial diversity.
Analysis of linear dynamic systems of low rank
DEFF Research Database (Denmark)
Reinikainen, S.P.; Aaljoki, K.; Høskuldsson, Agnar
2005-01-01
The vectors used in the approximations can be used to carry out graphic analysis of the dynamic systems. It is shown how score vectors can display the low-dimensional variation in data, how the loading vectors display the correlation structure, and how the transformation vectors show the way the variables generate the resulting variation in data; these graphic analyses have proven their importance in traditional chemometric methods. These graphics methods are important in supervising and controlling the process in light of the variation in data. The approximations stop when the prediction ability of the model cannot be improved for the present data. Therefore, the present methods give better prediction results than traditional methods that are based on exact solutions. The algorithms can provide solutions for models having hundreds or thousands of variables.
Analysis of linear dynamic systems of low rank
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2003-01-01
We show how score vectors can display the low-dimensional variation in data, how the loading vectors display the correlation structure, and how the transformation (causal) vectors show the way the variables generate the resulting variation in data. These graphics methods are important in supervising and controlling the process in light of the variation in data. The approximations stop when the prediction ability of the model cannot be improved for the present data. Therefore, the present methods give better prediction results than traditional methods that give exact solutions. The vectors used in the approximations can be used to carry out graphic analysis of the dynamic systems.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
A case study of PFBC for low rank coals
Energy Technology Data Exchange (ETDEWEB)
Jansson, S.A. [ABB Carbon AB, Finspong (Sweden)
1995-12-01
Pressurized Fluidized Bed Combustion (PFBC) combined-cycle technology allows the efficient and environmentally friendly utilization of solid fuels for power and combined heat and power generation. With current PFBC technology, thermal efficiencies near 46%, on an LHV basis and with low condenser pressures, can be reached in condensing power plants. Further efficiency improvements to 50% or more are possible. PFBC plants are characterized by high thermal efficiency, compactness, and extremely good environmental performance. The PFBC plants which are now in operation in Sweden, the U.S. and Japan burn medium-ash, bituminous coal with sulfur contents ranging from 0.7 to 4%. A sub-bituminous "black lignite" with high levels of sulfur, ash and moisture is used as fuel in a demonstration PFBC plant in Spain. Project discussions are underway, among others in Central and Eastern Europe, for the construction of PFBC plants which will burn lignite, oil-shale and also mixtures of coal and biomass with high efficiency and extremely low emissions. This paper provides performance data for PFBC plants operating on a range of low grade coals and other solid fuels, and summarizes other advantages of this leading new clean coal technology.
Case studies on direct liquefaction of low rank Wyoming coal
Energy Technology Data Exchange (ETDEWEB)
Adler, P.; Kramer, S.J.; Poddar, S.K. [Bechtel Corp., San Francisco, CA (United States)
1995-12-31
Previous studies have developed process designs, costs, and economics for the direct liquefaction of Illinois No. 6 and Wyoming Black Thunder coals at mine-mouth plants. This investigation concerns two case studies related to the liquefaction of Wyoming Black Thunder coal. The first study showed that reducing the coal liquefaction reactor design pressure from 3300 to 1000 psig could reduce the crude oil equivalent price by 2.1 $/bbl, provided equivalent performing catalysts can be developed. The second showed that incentives may exist for locating a facility that liquefies Wyoming coal on the Gulf Coast because of lower construction costs and higher labor productivity. These incentives depend on the relative values of the cost of shipping the coal to the Gulf Coast and the increased product revenues that may be obtained by distributing the liquid products among several nearby refineries.
Efficient and accurate fragmentation methods.
Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S
2014-09-16
Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence.
High Resolution Turntable Radar Imaging via Two Dimensional Deconvolution with Matrix Completion
Lu, Xinfei; Xia, Jie; Yin, Zhiping; Chen, Weidong
2017-01-01
Resolution is the bottleneck for the application of radar imaging, which is limited by the bandwidth for the range dimension and synthetic aperture for the cross-range dimension. The demand for high azimuth resolution inevitably results in a large amount of cross-range samplings, which always need a large number of transmit-receive channels or a long observation time. Compressive sensing (CS)-based methods could be used to reduce the samples, but suffer from the difficulty of designing the measurement matrix, and they are not robust enough in practical application. In this paper, based on the two-dimensional (2D) convolution model of the echo after matched filter (MF), we propose a novel 2D deconvolution algorithm for turntable radar to improve the radar imaging resolution. Additionally, in order to reduce the cross-range samples, we introduce a new matrix completion (MC) algorithm based on the hyperbolic tangent constraint to improve the performance of MC with undersampled data. Besides, we present a new way of echo matrix reconstruction for the situation that only partial cross-range data are observed and some columns of the echo matrix are missing. The new matrix has a better low rank property and needs just one operation of MC for all of the missing elements compared to the existing ways. Numerical simulations and experiments are carried out to demonstrate the effectiveness of the proposed method. PMID:28282904
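A generic matrix-completion baseline (iterated rank projection with data consistency, not the paper's hyperbolic-tangent-constrained MC algorithm) illustrates how missing cross-range samples of a low-rank echo matrix can be filled in; the sizes, rank, and sampling ratio below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = rng.random((n, n)) < 0.6                                # 60% observed

# Alternate between the best rank-r approximation (via truncated SVD)
# and re-imposing the observed entries.
X = np.where(mask, M, 0.0)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]        # project onto rank-r matrices
    X[mask] = M[mask]                       # enforce data consistency

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With a rank-2 matrix and 60% sampling, this simple scheme typically converges to a near-exact completion; the paper's contribution is a constraint that improves robustness at lower sampling rates.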
Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units
Boukaram, W.
2015-03-25
Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics, very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe the distribution of dust particles in the atmosphere, the concentration of mineral resources in the earth's crust or an uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data-sparse matrices called hierarchical matrices (H-matrices), where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work, and we present work done on the matrix-vector operation on the GPU using the KSPARSE library.
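The low-rank sub-block idea behind H-matrices can be shown in a few lines: a rank-k block is stored only through its factors U and V, and its matvec never forms the dense product, costing O((m+n)k) instead of O(mn). The sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1000, 1000, 8
U = rng.standard_normal((m, k))    # left factor of a rank-k block
V = rng.standard_normal((n, k))    # right factor: block A = U V^T
x = rng.standard_normal(n)

# Factored matvec: 2*(m+n)*k flops instead of 2*m*n
y_fast = U @ (V.T @ x)

# Dense reference, only for this toy check (never built in practice)
y_dense = (U @ V.T) @ x
```

Storage drops the same way: 2·1000·8 numbers for the factors versus 10^6 for the dense block, which is what gives H-matrices their near-linear memory footprint.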
Flores, Cintia; Caixach, Josep
2015-08-14
An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of the aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences provided crucial structural information. The potential of combined positive and negative ionization modes, full scan and fragmentation acquisition modes (TOF/TOF and HCD) by HRMS, and high resolution and accurate mass was investigated in order to allow unequivocal determination of MCs. Moreover, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted, and therefore covered the possibility of analyzing all MC analogs concurrently without any pre-selection of target MCs. Furthermore, archived data was subjected to retrospective "post-targeted" analysis, and the samples were screened for other potential toxins and related peptides such as anabaenopeptins. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs.
The Accurate Particle Tracer Code
Wang, Yulei; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain … to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility …
Accurate thickness measurement of graphene
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Vibrational Density Matrix Renormalization Group.
Baiardi, Alberto; Stein, Christopher J; Barone, Vincenzo; Reiher, Markus
2017-08-08
Variational approaches for the calculation of vibrational wave functions and energies are a natural route to obtain highly accurate results with controllable errors. Here, we demonstrate how the density matrix renormalization group (DMRG) can be exploited to optimize vibrational wave functions (vDMRG) expressed as matrix product states. We study the convergence of these calculations with respect to the size of the local basis of each mode, the number of renormalized block states, and the number of DMRG sweeps required. We demonstrate the high accuracy achieved by vDMRG for small molecules that were intensively studied in the literature. We then proceed to show that the complete fingerprint region of the sarcosine-glycine dipeptide can be calculated with vDMRG.
Comparison of transition-matrix sampling procedures
DEFF Research Database (Denmark)
Yevick, D.; Reimer, M.; Tromborg, Bjarne
2009-01-01
We compare the accuracy of the multicanonical procedure with that of transition-matrix models of static and dynamic communication system properties incorporating different acceptance rules. We find that for appropriate ranges of the underlying numerical parameters, algorithmically simple yet highly accurate procedures can be employed in place of the standard multicanonical sampling algorithm.
On affine non-negative matrix factorization
DEFF Research Database (Denmark)
Laurberg, Hans; Hansen, Lars Kai
2007-01-01
We generalize the non-negative matrix factorization (NMF) generative model to incorporate an explicit offset. Multiplicative estimation algorithms are provided for the resulting sparse affine NMF model. We show that the affine model has improved uniqueness properties and leads to more accurate...
Analytic Matrix Method for the Study of Propagation Characteristics of a Bent Planar Waveguide
Institute of Scientific and Technical Information of China (English)
LIU Qing; CAO Zhuang-Qi; SHEN Qi-Shun; DOU Xiao-Ming; CHEN Ying-Li
2000-01-01
An analytic matrix method is used to analyze and accurately calculate the propagation constant and bending losses of a bent planar waveguide. This method gives not only a dispersion equation with explicit physical insight, but also accurate complex propagation constants.
A More Accurate Fourier Transform
Courtney, Elya
2015-01-01
Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
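The comparison above can be illustrated with an off-bin cosine: reading the amplitude at the nearest FFT bin suffers spectral leakage, while the explicit rectangle-rule sum evaluated at the true frequency recovers the amplitude almost exactly (a toy signal, not one of the paper's data sets):

```python
import numpy as np

N = 1024
t = np.arange(N) / N
f0 = 50.3                          # deliberately between FFT bins
x = np.cos(2 * np.pi * f0 * t)     # true amplitude 1.0

# FFT estimate: amplitude read off the nearest bin (k = 50)
X_fft = np.fft.fft(x)
amp_fft = 2 * np.abs(X_fft[50]) / N      # leakage pulls this below 1.0

# Explicit-integral estimate: rectangle-rule sum at the true frequency
amp_ei = 2 * np.abs(np.sum(x * np.exp(-2j * np.pi * f0 * t)) / N)
```

Here the FFT bin reads roughly 0.86 while the explicit sum is within about 1% of the true amplitude 1.0; when the frequency lands exactly on a bin the two estimates coincide, since the FFT is precisely the rectangle-rule sum at bin frequencies.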
DEFF Research Database (Denmark)
Petersen, Kaare Brandt; Pedersen, Michael Syskind
Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.
Matrix with Prescribed Eigenvectors
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
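When the prescribed eigenvectors are linearly independent, the converse problem has a direct construction via the spectral decomposition A = V diag(λ) V⁻¹, where the columns of V are the prescribed eigenvectors (a textbook sketch, not the article's full treatment):

```python
import numpy as np

# Prescribed eigenvectors (as columns of V) and eigenvalues
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])
lam = np.array([3.0, -2.0])

# Build the matrix with exactly these eigenpairs
A = V @ np.diag(lam) @ np.linalg.inv(V)

# Check the defining property A v_i = lambda_i v_i
for i in range(2):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])
```

For this choice the construction gives A = [[0.5, 2.5], [2.5, 0.5]]; if symmetric A is desired, choosing orthonormal eigenvectors lets V⁻¹ be replaced by Vᵀ.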
Institute of Scientific and Technical Information of China (English)
彭淑娟; 赫高峰; 柳欣; 王华珍; 钟必能
2015-01-01
Given the complexity of human motion and the randomness of noise interference, this paper presents a motion-segmentation-based approach for recovering distorted human motion capture data via sparse and low-rank decomposition. The proposed approach first employs a bilateral filter to pre-correct the distorted motion data, suppressing singular information in the interference and preserving the continuity of the motion sequence. Then, probabilistic principal component analysis (PPCA) is used to automatically segment the corrected motion data into sub-intervals of different semantic behaviors. Subsequently, a sparse and low-rank decomposition based on the accelerated proximal gradient (APG) algorithm is applied to the data matrix of each distorted motion sub-segment, exploiting its stronger low-rank property, to achieve local recovery. Finally, the recovered sub-segments are combined according to the temporal order of the motion sequence to recover the whole distorted motion capture sequence. Experimental results show that the proposed approach can effectively restore distorted human motion data, helping to reconstruct motion capture data close to the true human poses.
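The sparse-plus-low-rank decomposition at the heart of such recovery methods can be sketched with a generic principal component pursuit solved by ADMM; this is the standard robust-PCA formulation, not the paper's APG-based, segmentation-aware algorithm, and all sizes and parameters are illustrative:

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

rng = np.random.default_rng(0)
n, r = 40, 2
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low rank
S_true = np.zeros((n, n))
idx = rng.random((n, n)) < 0.05                    # 5% sparse gross errors
S_true[idx] = 10 * rng.standard_normal(idx.sum())
D = L_true + S_true

# ADMM for min ||L||_* + lam * ||S||_1  s.t.  L + S = D
lam = 1.0 / np.sqrt(n)
mu = 0.25 * n * n / np.abs(D).sum()
Y = np.zeros_like(D)
S = np.zeros_like(D)
for _ in range(500):
    L = svt(D - S + Y / mu, 1.0 / mu)
    S = shrink(D - L + Y / mu, lam / mu)
    Y += mu * (D - L - S)

rel = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

The recovered L plays the role of the clean low-rank motion data, while S absorbs the sparse distortions.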
38 CFR 4.46 - Accurate measurement.
2010-07-01
Title 38, Pensions, Bonuses, and Veterans' Relief; Schedule for Rating Disabilities; Disability Ratings; The Musculoskeletal System; § 4.46 Accurate measurement: accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Technique to accurately quantify collagen content in hyperconfluent cell culture.
See, Eugene Yong-Shun; Toh, Siew Lok; Goh, James Cho Hong
2008-12-01
Tissue engineering aims to regenerate tissues that can successfully take over the functions of the native tissue when it is damaged or diseased. In most tissues, collagen makes up the bulk component of the extracellular matrix; thus, there is great emphasis on its accurate quantification in tissue engineering. It has already been reported that pepsin digestion is able to solubilize the collagen deposited within the cell layer for accurate quantification of collagen content in cultures, but this method has drawbacks when cultured cells are hyperconfluent. In this condition, pepsin digestion results in fragments of the cell layer that cannot be completely resolved. These fragments of the undigested cell sheet are visible to the naked eye, which can bias the final results. To the best of our knowledge, no method has been reported to accurately quantify the collagen content in hyperconfluent cell sheets. Therefore, this study aims to illustrate that sonication is able to aid pepsin digestion of hyperconfluent cell layers of fibroblasts and bone marrow mesenchymal stem cells, so as to solubilize all the collagen for accurate quantification.
Accurate segmentation of dense nanoparticles by partially discrete electron tomography
Energy Technology Data Exchange (ETDEWEB)
Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)
2012-03-15
Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. Highlights: We present a novel reconstruction method for partially discrete electron tomography. It accurately segments dense nanoparticles directly during reconstruction. The gray level to use for the nanoparticles is determined objectively. The method expands the set of samples for which discrete tomography can be applied.
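Since PDART is benchmarked against thresholded SIRT reconstructions, a minimal SIRT iteration on a toy system may help fix ideas (the 2-pixel example and all values are illustrative, not from the paper):

```python
import numpy as np

# Toy tomography system: A maps a 2-pixel image to 3 ray sums.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                      # noiseless projection data

# SIRT update: x <- x + C A^T R (b - A x),
# with R, C the inverse row- and column-sum scalings.
R = np.diag(1.0 / A.sum(axis=1))
C = np.diag(1.0 / A.sum(axis=0))

x = np.zeros(2)
for _ in range(200):
    x = x + C @ A.T @ R @ (b - A @ x)

print(np.round(x, 6))  # converges to [2. 3.]
```

On a consistent system the iterates converge to the true image; PDART differs by constraining the particle voxels to a discrete gray level during reconstruction.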
Demoor, M; Maneix, L; Ollitrault, D; Legendre, F; Duval, E; Claus, S; Mallein-Gerin, F; Moslemi, S; Boumediene, K; Galera, P
2012-06-01
Since the emergence in the 1990s of the autologous chondrocytes transplantation (ACT) in the treatment of cartilage defects, the technique, corresponding initially to implantation of chondrocytes, previously isolated and amplified in vitro, under a periosteal membrane, has greatly evolved. Indeed, the first generations of ACT showed their limits, with in particular the dedifferentiation of chondrocytes during the monolayer culture, inducing the synthesis of fibroblastic collagens, notably type I collagen to the detriment of type II collagen. Beyond the clinical aspect with its encouraging results, new biological substitutes must be tested to obtain a hyaline neocartilage. Therefore, the use of differentiated chondrocytes phenotypically stabilized is essential for the success of ACT at medium and long-term. That is why researchers try now to develop more reliable culture techniques, using among others, new types of biomaterials and molecules known for their chondrogenic activity, giving rise to the 4th generation of ACT. Other sources of cells, being able to follow chondrogenesis program, are also studied. The success of the cartilage regenerative medicine is based on the phenotypic status of the chondrocyte and on one of its essential component of the cartilage, type II collagen, the expression of which should be supported without induction of type I collagen. The knowledge accumulated by the scientific community and the experience of the clinicians will certainly allow to relief this technological challenge, which influence besides, the validation of such biological substitutes by the sanitary authorities. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
Institute of Scientific and Technical Information of China (English)
(no author listed)
2003-01-01
Matrix-bound phosphine was determined in the Jiaozhou Bay coastal sediment, in prawn-pond bottom soil, in the eutrophic lake Wulongtan, in the sewage sludge and in paddy soil as well. Results showed that matrix-bound phosphine levels in freshwater and coastal sediment, as well as in sewage sludge, are significantly higher than that in paddy soil. The correlation between matrix bound phosphine concentrations and organic phosphorus contents in sediment samples is discussed.
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Directory of Open Access Journals (Sweden)
Thu L. N. Nguyen
2016-05-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if we use it as an initial guess for an interactive local search of other higher precision localization scheme. Simulation results show the effectiveness of our approach.
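The low-rank structure exploited by this record can be illustrated with classical multidimensional scaling: double-centering an n-point squared-distance matrix in d dimensions yields a rank-d Gram matrix, which is what makes completion from few entries possible. A minimal numpy sketch (illustrative only, not the paper's modified-Newton solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 2

# Random sensor positions in the plane and their squared-distance matrix.
X = rng.uniform(0, 10, size=(n, d))
diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum(-1)

# Double-centering turns D into the Gram matrix B = Xc Xc^T (rank <= d),
# which is the low-rank structure that distance-matrix completion exploits.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J

w, V = np.linalg.eigh(B)
print(int(np.sum(w > 1e-8)))     # effective rank: 2

# Recover coordinates (up to rotation/translation) from the top-d eigenpairs.
X_rec = V[:, -d:] * np.sqrt(w[-d:])
diff = X_rec[:, None, :] - X_rec[None, :, :]
D_rec = (diff ** 2).sum(-1)
print(np.abs(D_rec - D).max())   # ~0: distances reproduced exactly
```

The recovered coordinates differ from the true ones only by a rigid motion, so all pairwise distances match.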
Chrétien, Stéphane; Guyeux, Christophe; Conesa, Bastien; Delage-Mouroux, Régis; Jouvenot, Michèle; Huetz, Philippe; Descôtes, Françoise
2016-08-31
Non-negative matrix factorization has become an essential tool for feature extraction in a wide spectrum of applications. In the present work, our objective is to extend the applicability of the method to the case of missing and/or corrupted data due to outliers. An essential property for missing-data imputation and detection of outliers is that the uncorrupted data matrix is low rank, i.e. has only a small number of degrees of freedom. We devise a new version of the Bregman proximal idea which preserves nonnegativity and combine it with the augmented Lagrangian approach for simultaneous reconstruction of the features of interest and detection of the outliers using a sparsity-promoting ℓ1 penalty. An application to the analysis of gene expression data of patients with bladder cancer is finally proposed.
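As a rough illustration of NMF with missing data, here is a plain weighted multiplicative-update scheme (a simplified stand-in, not the Bregman-proximal/augmented-Lagrangian method of the record above; sizes and rank are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonnegative low-rank data with some entries missing.
W_true = rng.random((30, 3))
H_true = rng.random((3, 20))
X = W_true @ H_true
M = rng.random(X.shape) > 0.2          # True where the entry is observed

# Multiplicative updates for NMF restricted to observed entries:
# minimize || M * (X - W H) ||_F^2  subject to W, H >= 0.
r = 3
W = rng.random((30, r)) + 0.1
H = rng.random((r, 20)) + 0.1
Xm = np.where(M, X, 0.0)
for _ in range(2000):
    WH = np.where(M, W @ H, 0.0)
    H *= (W.T @ Xm) / (W.T @ WH + 1e-12)
    WH = np.where(M, W @ H, 0.0)
    W *= (Xm @ H.T) / (WH @ H.T + 1e-12)

# Missing entries are imputed by the low-rank reconstruction W H.
err = np.abs((W @ H - X)[~M]).mean()
print(err)  # small when the low-rank model fits
```

The masking keeps the updates nonnegative while ignoring unobserved entries, so the factorization imputes them through the rank-r model.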
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.;
2013-01-01
For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for image win–lose–draw games (i.e. image matrix games) nonzero probabilities smaller than image are never needed. We also construct an explicit image win–lose game such that the unique optimal...
Seraji, H.
1987-01-01
Given a multivariable system, it is proved that the numerator matrix N(s) of the transfer function evaluated at any system pole either has unity rank or is a null matrix. It is also shown that N(s) evaluated at any transmission zero of the system has rank deficiency. Examples are given for illustration.
Directory of Open Access Journals (Sweden)
Ruben Anillo-Correa
2013-01-01
Low-rank coals are an important source of humic acids, which are important in the retention of water and nutrients in plants. In this study, coal samples from Montelibano, Colombia, were oxidized with air at different temperatures and subsequently with H2O2 and HNO3. The materials were characterized by FTIR, proximate and elemental analysis, and quantification of humic acids. The oxidation process led to an increased content of oxygenated groups and humic acids in the carbonaceous structure. The solid oxidized with air at 200 ºC for 12 h and re-oxidized with HNO3 for 12 h showed the highest percentage of humic acids (85.3%).
Yao, Y. X.; Liu, J.; Liu, C.; Lu, W. C.; Wang, C. Z.; Ho, K. M.
2015-08-01
We present an efficient method for calculating the electronic structure and total energy of strongly correlated electron systems. The method extends the traditional Gutzwiller approximation for one-particle operators to the evaluation of the expectation values of two particle operators in the many-electron Hamiltonian. The method is free of adjustable Coulomb parameters, and has no double counting issues in the calculation of total energy, and has the correct atomic limit. We demonstrate that the method describes well the bonding and dissociation behaviors of the hydrogen and nitrogen clusters, as well as the ammonia composed of hydrogen and nitrogen atoms. We also show that the method can satisfactorily tackle great challenging problems faced by the density functional theory recently discussed in the literature. The computational workload of our method is similar to the Hartree-Fock approach while the results are comparable to high-level quantum chemistry calculations.
Improved Matrix Uncertainty Selector
Rosenbaum, Mathieu
2011-01-01
We consider the regression model with observation error in the design: y=X\\theta* + e, Z=X+N. Here the random vector y in R^n and the random n*p matrix Z are observed, the n*p matrix X is unknown, N is an n*p random noise matrix, e in R^n is a random noise vector, and \\theta* is a vector of unknown parameters to be estimated. We consider the setting where the dimension p can be much larger than the sample size n and \\theta* is sparse. Because of the presence of the noise matrix N, the commonly used Lasso and Dantzig selector are unstable. An alternative procedure called the Matrix Uncertainty (MU) selector has been proposed in Rosenbaum and Tsybakov (2010) in order to account for the noise. The properties of the MU selector have been studied in Rosenbaum and Tsybakov (2010) for sparse \\theta* under the assumption that the noise matrix N is deterministic and its values are small. In this paper, we propose a modification of the MU selector when N is a random matrix with zero-mean entries having the variances th...
Eves, Howard
1980-01-01
The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri
Rheocasting Al matrix composites
Energy Technology Data Exchange (ETDEWEB)
Girot, F.A.; Albingre, L.; Quenisset, J.M.; Naslain, R.
1987-11-01
A development status account is given for the rheocasting method of Al-alloy matrix/SiC-whisker composites, which involves the incorporation and homogeneous distribution of 8-15 vol pct of whiskers through the stirring of the semisolid matrix melt while retaining sufficient fluidity for casting. Nicalon SiC fibers of 1, 3, and 6 mm as well as SiC whisker reinforcements have been experimentally investigated, with attention to the characterization of the resulting microstructures and the effects of fiber-matrix interactions. A thin silica layer is found at the whisker surface. 7 references.
Mueller matrix differential decomposition.
Ortega-Quijano, Noé; Arce-Diego, José Luis
2011-05-15
We present a Mueller matrix decomposition based on the differential formulation of the Mueller calculus. The differential Mueller matrix is obtained from the macroscopic matrix through an eigenanalysis. It is subsequently resolved into the complete set of 16 differential matrices that correspond to the basic types of optical behavior for depolarizing anisotropic media. The method is successfully applied to the polarimetric analysis of several samples. The differential parameters enable one to perform an exhaustive characterization of anisotropy and depolarization. This decomposition is particularly appropriate for studying media in which several polarization effects take place simultaneously. © 2011 Optical Society of America
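The core computational step of the differential decomposition, obtaining the differential Mueller matrix as the matrix logarithm of the macroscopic one, can be sketched as follows (the quarter-wave retarder example and the use of `scipy.linalg.logm` are illustrative, not the paper's full eigenanalysis):

```python
import numpy as np
from scipy.linalg import expm, logm

# Mueller matrix of a quarter-wave linear retarder (fast axis horizontal):
delta = np.pi / 2
M = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, np.cos(delta), np.sin(delta)],
              [0, 0, -np.sin(delta), np.cos(delta)]])

# Differential Mueller matrix: matrix logarithm of the macroscopic matrix.
m = logm(M).real
print(np.round(m, 6))

# The retardance reappears as an off-diagonal generator entry,
# and expm(m) recovers the macroscopic matrix.
print(np.isclose(m[2, 3], np.pi / 2))   # True
print(np.allclose(expm(m), M))          # True
```

For a homogeneous element the generator m accumulates linearly along the propagation path, which is what makes the differential parameters convenient when several polarization effects occur simultaneously.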
The "Pesticide-exposure Matrix" was developed to help epidemiologists and other researchers identify the active ingredients to which people were likely exposed when their homes and gardens were treated for pests in past years.
Koehler, Wolfgang
2011-01-01
A new classical theory of gravitation within the framework of general relativity is presented. It is based on a matrix formulation of four-dimensional Riemann-spaces and uses no artificial fields or adjustable parameters. The geometrical stress-energy tensor is derived from a matrix-trace Lagrangian, which is not equivalent to the curvature scalar R. To enable a direct comparison with the Einstein-theory a tetrad formalism is utilized, which shows similarities to teleparallel gravitation theories, but uses complex tetrads. Matrix theory might solve a 27-year-old, fundamental problem of those theories (sec. 4.1). For the standard test cases (PPN scheme, Schwarzschild-solution) no differences to the Einstein-theory are found. However, the matrix theory exhibits novel, interesting vacuum solutions.
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg; Borlund, Pia
2007-01-01
The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, matrix generation and the composition of proximity measures, are introduced and discussed. In this second part, the authors introduce and thoroughly demonstrate two related matrix comparison techniques, the Mantel test and Procrustes analysis, respectively. These techniques can compare ... important. Alternatively, or as a supplement, Procrustes analysis compares the actual ordination results without investigating the underlying proximity measures, by matching two configurations of the same objects in a multidimensional space. An advantage of the Procrustes analysis, though, is the graphical...
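A Mantel test of the kind described above can be sketched in a few lines: correlate the upper triangles of two proximity matrices and assess significance by permuting object labels (matrix sizes and noise levels here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel(A, B, n_perm=999):
    """Permutation test for correlation between two distance matrices."""
    n = A.shape[0]
    iu = np.triu_indices(n, 1)          # use each object pair once
    a = A[iu]
    r_obs = np.corrcoef(a, B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)          # permute objects, not entries
        Bp = B[np.ix_(p, p)]
        if abs(np.corrcoef(a, Bp[iu])[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Two related proximity matrices: B is a noisy copy of A.
X = rng.random((12, 4))
A = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
B = A + rng.normal(scale=0.05, size=A.shape)
B = (B + B.T) / 2
np.fill_diagonal(B, 0)

r, p = mantel(A, B)
print(round(r, 2), p)  # high correlation, small p-value
```

Permuting whole objects (rows and columns together) rather than individual entries preserves the dependence structure within each matrix, which is the point of the Mantel test.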
The Matrix Organization Revisited
DEFF Research Database (Denmark)
Gattiker, Urs E.; Ulhøi, John Parm
1999-01-01
This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actually be hybrids (i.e., they mix several ways of organizing to allocate resources effectively).
Optical Coherency Matrix Tomography
2015-10-19
... optics has been studied theoretically, but has not been demonstrated experimentally heretofore. Even in the simplest case of two binary DoFs ... the coherency matrix G spanning these DoFs. This optical coherency matrix has not been measured in its entirety to date, even in the simplest case of two ... dense coding, etc. CREOL, The College of Optics & Photonics, University of Central Florida, Orlando, Florida 32816, USA.
Tenreiro Machado, J. A.
2015-08-01
This paper addresses the matrix representation of dynamical systems in the perspective of fractional calculus. Fractional elements and fractional systems are interpreted under the light of the classical Cole-Cole, Davidson-Cole, and Havriliak-Negami heuristic models. Numerical simulations for an electrical circuit enlighten the results for matrix based models and high fractional orders. The conclusions clarify the distinction between fractional elements and fractional systems.
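The Cole-Cole heuristic model mentioned in this record is easy to evaluate numerically; a fractional element of order alpha exhibits a constant phase of -alpha*pi/2 at high frequency (parameter values below are illustrative):

```python
import numpy as np

# Cole-Cole model: Z(w) = R / (1 + (j*w*tau)**alpha).
# alpha = 1 recovers the integer-order (Debye) element;
# 0 < alpha < 1 gives a fractional element.
def cole_cole(w, R=1.0, tau=1.0, alpha=0.8):
    return R / (1.0 + (1j * w * tau) ** alpha)

w = np.logspace(-2, 2, 5)
Z = cole_cole(w)
print(np.round(np.abs(Z), 3))

# A fractional element shows a constant phase of -alpha*pi/2
# in the high-frequency limit.
phase_hf = np.angle(cole_cole(np.array([1e6]), alpha=0.8))[0]
print(round(phase_hf / (np.pi / 2), 3))  # ≈ -0.8
```

The fractional order is read off directly from the asymptotic phase, which distinguishes a fractional element from a fractional system built of several such elements.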
Czerwinski, Michael; Spence, Jason R
2017-01-05
Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.
Laboratory Building for Accurate Determination of Plutonium
Institute of Scientific and Technical Information of China (English)
2008-01-01
The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An...
Understanding the Code: keeping accurate records.
Griffith, Richard
2015-10-01
In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.
Accurate Element of Compressive Bar considering the Effect of Displacement
Directory of Open Access Journals (Sweden)
Lifeng Tang
2015-01-01
By constructing the compressive bar element and developing its stiffness matrix, most issues concerning the compressive bar can be solved. In this paper, the displacement shape functions are obtained from the second derivative of the equilibrium differential governing equations. The finite element formula of the compressive bar element is then developed using the potential energy principle and the analytical shape functions. Based on the total potential energy variation principle, the static and geometric stiffness matrices are proposed, in which the large deformation of the compressive bar is considered. To verify the accuracy and validity of the analytical trial function element proposed in this paper, a number of numerical examples are presented. Comparisons show that the proposed element has high calculation efficiency and a rapid speed of convergence.
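For the elastic part, the classic 2-node axial bar element can be sketched directly (this is the textbook element, not the analytical trial-function element of the paper; the geometric stiffness shown is the simple truss-style form, and all material values are illustrative):

```python
import numpy as np

def bar_stiffness(E, A, L):
    # Elastic axial stiffness: k = (E*A/L) * [[1, -1], [-1, 1]]
    return (E * A / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

def geometric_stiffness(N, L):
    # Truss-style geometric (string) stiffness from an axial force N.
    return (N / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

K = bar_stiffness(E=210e9, A=1e-4, L=2.0)   # steel bar, 1 cm^2, 2 m
Kg = geometric_stiffness(N=500.0, L=2.0)
print(K[0, 0], Kg[0, 0])

# Fixed-free bar under end load P: tip displacement u = P*L/(E*A).
P = 1000.0
u = P / K[1, 1]
print(u)  # = 1000 * 2 / (210e9 * 1e-4) ≈ 9.52e-5 m
```

Adding the geometric stiffness to the elastic stiffness is how the large-deformation (axial-force) effect enters a linearized buckling or second-order analysis.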
Estimating the mixing matrix by using less sparsity
Institute of Scientific and Technical Information of China (English)
Guoxu Zhou; Zuyuan Yang; Xiaoxin Liao; Jinlong Zhang
2009-01-01
In this paper, the nonlinear projection and column masking (NPCM) algorithm is proposed to estimate the mixing matrix for blind source separation. It preserves the samples which are close to the direction of interest while suppressing the rest. Compared with existing approaches, NPCM works efficiently even if the sources are less sparse (i.e., they are not strictly sparse). Finally, we show that NPCM provides considerably accurate estimation of the mixing matrix by simulations.
Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin
2017-04-01
In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.
Accurate mass and velocity functions of dark matter haloes
Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly
2017-08-01
N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙ with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, bias, and halo mass function. We obtain a very accurate model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z ... occupation distribution using Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.
Bhatia, Rajendra
2013-01-01
This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held in Ecole Polytechnique and Thales Research and Technology Center, Palaiseau, France, in February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited french or indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contribution or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved, according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.
Accurate Interatomic Force Fields via Machine Learning with Covariant Kernels
Glielmo, Aldo; De Vita, Alessandro
2016-01-01
We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian Process (GP) Regression. This is based on matrix-valued kernel functions, to which we impose that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such "covariant" GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni and Fe crystalline...
Accurate interatomic force fields via machine learning with covariant kernels
Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro
2017-06-01
We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO (d ) for the relevant dimensionality d . Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
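The group-summation construction described in these two records can be sketched in 2-D: summing a rotation-invariant scalar kernel over a finite point group C_n yields a matrix-valued kernel that is exactly covariant under rotations in that group (the base kernel, group size, and test points are illustrative):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def base_kernel(x, y, ell=1.0):
    # Rotation-invariant scalar kernel (squared exponential in the distance).
    return np.exp(-np.sum((x - y) ** 2) / (2 * ell ** 2))

def covariant_kernel(x, y, n=8):
    # Matrix-valued kernel: average R * k(x, R y) over the finite
    # rotation group C_n (a discrete stand-in for the SO(2) integral).
    K = np.zeros((2, 2))
    for k in range(n):
        R = rot(2 * np.pi * k / n)
        K += R * base_kernel(x, R @ y)
    return K / n

x = np.array([0.7, 0.2])
y = np.array([-0.3, 0.5])
S = rot(2 * np.pi * 3 / 8)          # an element of C_8

# Covariance: rotating an input rotates the predicted (vector) output.
lhs = covariant_kernel(S @ x, y)
rhs = S @ covariant_kernel(x, y)
print(np.allclose(lhs, rhs))        # True
```

The equality holds exactly for rotations in the chosen point group; the papers show that integrating over all of SO(d) extends this to arbitrary rotations and, in specific cases, yields closed forms.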
Identification of dynamic stiffness matrix of bearing joint region
Institute of Scientific and Technical Information of China (English)
Feng HU; Bo WU; Youmin HU; Tielin SHI
2009-01-01
The paper proposes a method for identifying the dynamic stiffness matrix of a bearing joint region on the basis of theoretical analysis and experiments. An identification model of the dynamic stiffness matrix is deduced from the synthetic substructure method. The dynamic stiffness matrix of the bearing joint region can be identified by measuring the matrix of frequency response functions (FRFs) of the substructure (axle) and of the whole structure (assembly of the axle, bearing, and bearing housing) in different positions. Considering the difficulty of measuring angular displacement, applying moments, and directly measuring the relevant FRFs of rotational degrees of freedom, an accurately calibrated finite element model of the unconstrained structure is employed for indirect estimation. In experiments and simulation analysis, the FRFs related to translational degrees of freedom estimated through the finite element model agree with experimental results, and the identified dynamic stiffness matrix of the bearing joint region is highly reliable.
Directory of Open Access Journals (Sweden)
J. Mukerji
1993-10-01
The present state of knowledge of ceramic-matrix composites is reviewed. The fracture toughness of present structural ceramics is not sufficient to permit the design of high-performance machines with ceramic parts, and they also fail by catastrophic brittle fracture. It is generally believed that further improvement of fracture toughness is only possible by making composites of ceramics with ceramic fibres, particulates, or platelets. Only ceramic-matrix composites capable of working above 1000 °C are dealt with, keeping reinforced plastics and metal-reinforced ceramics outside the purview. The author discusses the basic mechanisms of toughening, the fabrication of composites, and the difficulties involved. Properties of available fibres and whiskers are given, the best results obtained so far are indicated, and the limitations on improving the properties of ceramic-matrix composites are discussed.
Energy Technology Data Exchange (ETDEWEB)
Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory
2010-01-01
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive n^{gamma} factor for a fixed constant {gamma}. We also present an algorithm for this problem that achieves an (n-k) multiplicative approximation ratio.
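As a concrete illustration of the objective, a brute-force sketch (illustrative only; since the problem is NP-hard, this enumeration is feasible only for tiny instances, and the function name is ours):

```python
from itertools import combinations

def interdict(M, k):
    """Brute-force matrix interdiction: remove k columns so that the
    sum over rows of the remaining row maxima is minimized."""
    n_cols = len(M[0])
    best_val, best_cols = None, None
    for removed in combinations(range(n_cols), k):
        keep = [j for j in range(n_cols) if j not in removed]
        val = sum(max(row[j] for j in keep) for row in M)
        if best_val is None or val < best_val:
            best_val, best_cols = val, removed
    return best_val, best_cols

M = [[3, 1, 2],
     [1, 3, 2]]
print(interdict(M, 1)[0])  # 5: removing either of the first two columns is optimal
```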
Extracellular matrix structure.
Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K
2016-02-01
Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors, forming a complex network within which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from the ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes the complex ECM structure so as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented.
Finite Temperature Matrix Theory
Meana, M L; Peñalba, J P; Meana, Marco Laucelli; Peñalba, Jesús Puente
1998-01-01
We show how the Lorentz-invariant canonical partition function for Matrix Theory, as a light-cone formulation of M-theory, can be computed. We explicitly show how, when the eleventh dimension is decompactified, the N=1 eleven-dimensional SUGRA partition function appears. From this particular analysis we also clarify the question of the discernibility problem when making statistics with supergravitons (the N! problem) in Matrix black hole configurations. We also provide a high-temperature expansion which captures some structure of the canonical partition function when interactions amongst D-particles are turned on. The connection with the semi-classical computations thermalizing the open superstrings attached to a D-particle is also clarified through a Born-Oppenheimer approximation. Some ideas about how Matrix Theory would describe the complementary degrees of freedom of the massless content of eleven-dimensional SUGRA are also discussed.
Matrixed business support comparison study.
Energy Technology Data Exchange (ETDEWEB)
Parsons, Josh D.
2004-11-01
The Matrixed Business Support Comparison Study reviewed the current matrixed Chief Financial Officer (CFO) division staff models at Sandia National Laboratories. There were two primary drivers of this analysis: (1) the increasing number of financial staff matrixed to mission customers and (2) the desire to further understand the matrix process and the opportunities and challenges it creates.
Aoki, H; Kawai, H; Kitazawa, Y; Tada, T; Tsuchiya, A
1999-01-01
We review our proposal for a constructive definition of superstring theory, the type IIB matrix model. The IIB matrix model is a manifestly covariant model for space-time and matter which possesses N=2 supersymmetry in ten dimensions. We refine our arguments to reproduce string perturbation theory based on the loop equations. We emphasize that the space-time is dynamically determined from the eigenvalue distributions of the matrices. We also explain how matter, gauge fields and gravitation appear as fluctuations around the dynamically determined space-time.
Kitazawa, Y; Saito, O; Kitazawa, Yoshihisa; Mizoguchi, Shun'ya; Saito, Osamu
2006-01-01
We study the zero-dimensional reduced model of D=6 pure super Yang-Mills theory and argue that the large N limit describes the (2,0) Little String Theory. The one-loop effective action shows that the force exerted between two diagonal blocks of matrices behaves as 1/r^4, implying a six-dimensional spacetime. We also observe that it is due to non-gravitational interactions. We construct wave functions and vertex operators which realize the D=6, (2,0) tensor representation. We also comment on other "little" analogues of the IIB matrix model and Matrix Theory with less supercharges.
Hohn, Franz E
2012-01-01
This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology. Other subjects include the complete treatment of the structur...
Rheocasting Al Matrix Composites
Girot, F. A.; Albingre, L.; Quenisset, J. M.; Naslain, R.
1987-11-01
Aluminum alloy matrix composites reinforced by short SiC fibers (or whiskers) can be prepared by rheocasting, a process which consists of the incorporation and homogeneous distribution of the reinforcement by stirring within a semi-solid alloy. Using this technique, composites containing fiber volume fractions in the range of 8-15% have been obtained for various fiber lengths (i.e., 1 mm, 3 mm and 6 mm for SiC fibers). This paper attempts to delineate the best compocasting conditions for aluminum matrix composites reinforced by short SiC fibers (e.g., Nicalon) or SiC whiskers (e.g., Tokamax) and to characterize the resulting microstructures.
Frahm, K M
2016-01-01
Using parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos, we construct a reduced Google matrix $G_R$ which describes the properties and interactions of a certain subset of selected nodes belonging to a much larger directed network. The matrix $G_R$ takes into account effective interactions between subset nodes through all their indirect links via the whole network. We argue that this approach gives new possibilities for analyzing effective interactions in a group of nodes embedded in a large directed network. Possible efficient numerical methods for the practical computation of $G_R$ are also described.
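A minimal numerical sketch of the block construction, assuming the standard form $G_R = G_{rr} + G_{rs}(1 - G_{ss})^{-1}G_{sr}$ used in this line of work (variable names and the toy matrix are ours):

```python
import numpy as np

def reduced_google_matrix(G, subset):
    """G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr for a column-stochastic G.
    `subset` indexes the selected nodes; the rest form the scattering part."""
    n = G.shape[0]
    r = np.array(subset)
    s = np.array([i for i in range(n) if i not in set(subset)])
    Grr, Grs = G[np.ix_(r, r)], G[np.ix_(r, s)]
    Gsr, Gss = G[np.ix_(s, r)], G[np.ix_(s, s)]
    # (1 - Gss) is invertible because columns of Gss sum to less than 1
    return Grr + Grs @ np.linalg.solve(np.eye(len(s)) - Gss, Gsr)

# toy column-stochastic matrix on 5 nodes
rng = np.random.default_rng(0)
A = rng.random((5, 5))
G = A / A.sum(axis=0)
GR = reduced_google_matrix(G, [0, 1])
print(GR.sum(axis=0))  # columns of G_R again sum to 1
```

A useful sanity check: if G is column-stochastic, so is G_R, since the indirect-link term routes exactly the probability mass that leaves the subset back into it.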
Density matrix perturbation theory.
Niklasson, Anders M N; Challacombe, Matt
2004-05-14
An orbital-free quantum perturbation theory is proposed. It gives the response of the density matrix upon variation of the Hamiltonian by quadratically convergent recursions based on perturbed projections. The technique allows treatment of embedded quantum subsystems with a computational cost scaling linearly with the size of the perturbed region, O(N_pert), and as O(1) with the total system size. The method allows efficient high-order perturbation expansions, as demonstrated with an example involving a 10th-order expansion. Density matrix analogs of Wigner's 2n+1 rule are also presented.
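The quantity such methods compute can be checked against the textbook sum-over-states response of the density matrix. The sketch below implements that reference formula and verifies it by finite differences; it is not the recursive perturbed-projection scheme of the paper, and the function names are ours:

```python
import numpy as np

def density(H, n_occ):
    """Zero-temperature density matrix: projector onto the n_occ lowest eigenvectors."""
    e, C = np.linalg.eigh(H)
    Cocc = C[:, :n_occ]
    return Cocc @ Cocc.T

def first_order_response(H0, H1, n_occ):
    """Sum-over-states first-order response dP/dlambda of P(H0 + lambda*H1)."""
    e, C = np.linalg.eigh(H0)
    P1 = np.zeros_like(H0)
    for i in range(n_occ):                 # occupied states
        for a in range(n_occ, len(e)):     # virtual states
            amp = (C[:, a] @ H1 @ C[:, i]) / (e[i] - e[a])
            P1 += amp * (np.outer(C[:, i], C[:, a]) + np.outer(C[:, a], C[:, i]))
    return P1

rng = np.random.default_rng(1)
A = rng.random((6, 6)); H0 = (A + A.T) / 2   # random symmetric Hamiltonian
B = rng.random((6, 6)); H1 = (B + B.T) / 2   # symmetric perturbation
lam = 1e-5
numeric = (density(H0 + lam * H1, 3) - density(H0 - lam * H1, 3)) / (2 * lam)
err = np.max(np.abs(numeric - first_order_response(H0, H1, 3)))
print(err)  # finite-difference agreement, small
```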
Energy Technology Data Exchange (ETDEWEB)
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Accurate tracking control in LOM application
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM). An improvement in contour accuracy is achieved by introducing a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. A Zero Phase Error Tracking Controller (ZPETC) is used to eliminate the single-axis following error and thus reduce the contour error. A simulation is developed in a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.
Stable Principal Component Pursuit
Zhou, Zihan; Wright, John; Candes, Emmanuel; Ma, Yi
2010-01-01
In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is...
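A hedged sketch of Principal Component Pursuit via a standard alternating ADMM/augmented-Lagrangian scheme (this is not the exact algorithm or the stability analysis of the paper; the choices of `lam` and `mu` follow common heuristics from the RPCA literature):

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=500):
    """Principal Component Pursuit sketch:
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # standard PCP weight
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()       # common step-size heuristic
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # singular value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # entrywise soft thresholding for the sparse part
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank 2
S0 = np.zeros((40, 40))
idx = rng.random((40, 40)) < 0.05
S0[idx] = 10 * rng.standard_normal(idx.sum())   # gross sparse errors
L, S = pcp(L0 + S0)
rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(rel_err)  # small relative error: the low-rank part is recovered
```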
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
Abstract This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit, presented to compensate for the NMOS mismatch error in MOS differential-type voltage averaging circuits. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands
2009-01-01
We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n^1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n^0.575) entries in a single column of the matrix. We also give an algorithm that maintains the rank using O(n^2) arithmetic operations per rank one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions.
Empirical codon substitution matrix
Directory of Open Access Journals (Sweden)
Gonnet Gaston H
2005-06-01
Full Text Available Abstract Background Codon substitution probabilities are used in many types of molecular evolution studies such as determining Ka/Ks ratios, creating ancestral DNA sequences or aligning coding DNA. Until the recent dramatic increase in genomic data enabled construction of empirical matrices, researchers relied on parameterized models of codon evolution. Here we present the first empirical codon substitution matrix entirely built from alignments of coding sequences from vertebrate DNA and thus provide an alternative to parameterized models of codon evolution. Results A set of 17,502 alignments of orthologous sequences from five vertebrate genomes yielded 8.3 million aligned codons from which the number of substitutions between codons was counted. From these data, both a probability matrix and a matrix of similarity scores were computed. They are 64 × 64 matrices describing the substitutions between all codons. Substitutions from sense codons to stop codons are not considered, resulting in block diagonal matrices consisting of 61 × 61 entries for the sense codons and 3 × 3 entries for the stop codons. Conclusion The amount of genomic data currently available allowed for the construction of an empirical codon substitution matrix. However, more sequence data are still needed to construct matrices from different subsets of DNA, specific to kingdoms, evolutionary distance or different amounts of synonymous change. Codon mutation matrices have advantages for alignments up to medium evolutionary distances and for usages that require DNA, such as ancestral reconstruction of DNA sequences and the calculation of Ka/Ks ratios.
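The counting step behind such a matrix can be sketched as follows (a toy illustration with invented helper names and a two-codon alignment; the actual matrices were built from 8.3 million aligned codons):

```python
from itertools import product

BASES = "ACGT"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]  # all 64 codons
INDEX = {c: i for i, c in enumerate(CODONS)}

def codon_substitution_counts(alignments):
    """Count codon -> codon substitutions over pairs of aligned coding
    sequences, then normalize each row into conditional probabilities."""
    counts = [[0] * 64 for _ in range(64)]
    for seq1, seq2 in alignments:
        for k in range(0, min(len(seq1), len(seq2)) - 2, 3):
            a, b = seq1[k:k + 3], seq2[k:k + 3]
            if a in INDEX and b in INDEX:   # skip codons with gaps/ambiguity codes
                counts[INDEX[a]][INDEX[b]] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([x / total if total else 0.0 for x in row])
    return counts, probs

counts, probs = codon_substitution_counts([("ATGGCA", "ATGGCC")])
print(counts[INDEX["GCA"]][INDEX["GCC"]])  # 1
```

Excluding sense-to-stop substitutions, as in the paper, would amount to restricting the index set to the 61 sense codons before normalizing.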
Matrix Embedded Organic Synthesis
Kamakolanu, U. G.; Freund, F. T.
2016-05-01
In the matrix of minerals such as olivine, a redox reaction of the low-Z elements occurs. Oxygen is oxidized to the peroxy state while the low-Z elements become chemically reduced. We assign them a formula [CxHyOzNiSj]n- and call them proto-organics.
Dominguez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne
2013-12-01
Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced; this usually appears as significant reductions in matrix-dominated properties, such as compression and shear strength. Voids in composite materials are areas that are absent of the composite components: matrix and fibers. Accurately characterizing and estimating the voids is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, which normally forces the segmentation to be performed manually from the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved able to differentiate void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
Rolling Element Bearing Stiffness Matrix Determination (Presentation)
Energy Technology Data Exchange (ETDEWEB)
Guo, Y.; Parker, R.
2014-01-01
Current theoretical bearing models differ in their stiffness estimates because of different model assumptions. In this study, a finite element/contact mechanics model is developed for rolling element bearings with the focus of obtaining accurate bearing stiffness for a wide range of bearing types and parameters. A combined surface integral and finite element method is used to solve for the contact mechanics between the rolling elements and races. This model captures the time-dependent characteristics of the bearing contact due to the orbital motion of the rolling elements. A numerical method is developed to determine the full bearing stiffness matrix corresponding to two radial, one axial, and two angular coordinates; the rotation about the shaft axis is free by design. This proposed stiffness determination method is validated against experiments in the literature and compared to existing analytical models and widely used advanced computational methods. The fully populated stiffness matrix demonstrates the coupling between radial, axial, and tilting bearing deflections.
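The notion of a fully populated stiffness matrix can be illustrated by numerically linearizing a load-deflection map about an operating point. This is a generic sketch with invented names, not the paper's finite element/contact mechanics model; the linear test map simply verifies the differencing:

```python
import numpy as np

def stiffness_matrix(force, q0, h=1e-6):
    """Linearize a (generally nonlinear) load-deflection map F(q) about q0
    by central differences: K_ij = dF_i/dq_j. Bearing codes do the same
    about the loaded equilibrium; off-diagonal terms are the couplings."""
    n = len(q0)
    K = np.zeros((n, n))
    for j in range(n):
        dq = np.zeros(n)
        dq[j] = h
        K[:, j] = (force(q0 + dq) - force(q0 - dq)) / (2 * h)
    return K

# check on a known linear spring with 5 DOF (2 radial, 1 axial, 2 tilt)
rng = np.random.default_rng(0)
A = rng.random((5, 5))
K_true = A + A.T + 5 * np.eye(5)      # symmetric, fully populated
F = lambda q: K_true @ q
K = stiffness_matrix(F, np.zeros(5))
print(np.max(np.abs(K - K_true)))     # recovers K_true to machine precision
```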
Matrix product states for gauge field theories.
Buyens, Boye; Haegeman, Jutho; Van Acoleyen, Karel; Verschelde, Henri; Verstraete, Frank
2014-08-29
The matrix product state formalism is used to simulate Hamiltonian lattice gauge theories. To this end, we define matrix product state manifolds which are manifestly gauge invariant. As an application, we study (1+1)-dimensional one flavor quantum electrodynamics, also known as the massive Schwinger model, and are able to determine very accurately the ground-state properties and elementary one-particle excitations in the continuum limit. In particular, a novel particle excitation in the form of a heavy vector boson is uncovered, compatible with the strong coupling expansion in the continuum. We also study full quantum nonequilibrium dynamics by simulating the real-time evolution of the system induced by a quench in the form of a uniform background electric field.
Many-Body Density Matrix Theory
Tymczak, C. J.; Borysenko, Kostyantyn
2014-03-01
We propose a novel method for obtaining an accurate correlated ground-state wave function for chemical systems beyond the Hartree-Fock level of theory. This method leverages existing linear scaling methods to accurately and easily obtain the correlated wave functions. We report on the theoretical development of this methodology, which we refer to as Many-Body Density Matrix Theory. This theory has several significant advantages over existing methods. One, its computational cost is equivalent to Hartree-Fock or Density Functional Theory. Two, it is a variational upper bound to the exact many-body ground-state energy. Three, like Hartree-Fock, it has no self-interaction. Four, it is size extensive. And five, it formally scales with the complexity of the correlations, which in many cases scales linearly. We show the development of this theory and give several relevant examples.
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
Accurate colorimetric feedback for RGB LED clusters
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
Accurate guitar tuning by cochlear implant musicians.
Directory of Open Access Journals (Sweden)
Thomas Lu
Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
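The beat-listening strategy can be demonstrated numerically: two tones 1 Hz apart produce an envelope that pulses at the difference frequency, a temporal cue that needs no fine spectral resolution (a synthetic sketch; sampling rate, window size and threshold are our choices):

```python
import numpy as np

# Two tones 1 Hz apart: the summed waveform's envelope pulses at the
# difference frequency, which can be counted to tune one tone to the other.
fs, T = 1000, 4.0
time = np.arange(0, T, 1 / fs)
s = np.sin(2 * np.pi * 440 * time) + np.sin(2 * np.pi * 441 * time)

# short-window RMS tracks the slow beat envelope |2 cos(pi * 1 Hz * t)|
win = 25  # samples per window (25 ms)
rms = np.sqrt(np.mean(s.reshape(-1, win) ** 2, axis=1))

# envelope minima (runs of low RMS) are spaced by 1/|f1 - f2| seconds
low_idx = np.flatnonzero(rms < 0.3 * rms.max())
splits = np.flatnonzero(np.diff(low_idx) > 1) + 1
centers = [run.mean() for run in np.split(low_idx, splits)]
beat_period = float(np.mean(np.diff(centers))) * win / fs
print(round(beat_period, 2))  # 1.0 s, i.e. a 1 Hz beat
```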
Synthesizing Accurate Floating-Point Formulas
Ioualalen, Arnault; Martel, Matthieu
2013-01-01
Many critical embedded systems perform floating-point computations, yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...
Efficient Accurate Context-Sensitive Anomaly Detection
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, is proposed, based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and a code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.
Accurate Control of Josephson Phase Qubits
2016-04-14
Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang, Physical Review B 68, 224518 (2003). Affiliations: Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University. Cited therein: K. Kraus, States, Effects, and Operations: Fundamental Notions of Quantum Theory, Lecture Notes in Physics, Vol. 190 (Springer-Verlag).
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Accurate integration of forced and damped oscillators
García Alonso, Fernando Luis; Cortés Molina, Mónica; Villacampa, Yolanda; Reyes Perales, José Antonio
2016-01-01
The new methods accurately integrate forced and damped oscillators. A family of analytical functions known as T-functions is introduced, dependent on three parameters. The solution is expressed as a series of T-functions, with coefficients calculated by means of recurrences that involve the perturbation function. In the T-functions series method the perturbation parameter is the factor in the local truncation error. Furthermore, this method is zero-stable and convergent. An applica...
Accurate finite element modeling of acoustic waves
Idesman, A.; Pham, D.
2014-07-01
In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with the time increments close to the stability limit. However, even in this case and the use of the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and the suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes the stage of basic computations and the filtering stage. This technique allows accurate convergent results at mesh refinement as well as significantly reduces the numerical anisotropy of solutions. We should mention that the approach suggested is very general and can be equally applied to any loading as well as for any space-discretization technique and any explicit or implicit time-integration method.
A new fast direct solver for the boundary element method
Huang, S.; Liu, Y. J.
2017-04-01
A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix, which can be decomposed into the product of several diagonal block matrices; its inverse can then be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximating the coefficient matrix of the BEM with a hierarchical off-diagonal low-rank matrix is proposed. Compared to current fast direct solvers based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with the total number of unknowns above 200,000 are presented. The results show that the new fast direct solver can solve large 3-D BEM models accurately and more efficiently than the conventional BEM.
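The key identity can be sketched directly: a Sherman-Morrison-Woodbury solve for a diagonal-plus-low-rank system needs only cheap solves against the diagonal part and one small k×k system. This is a generic dense sketch of the kernel that hierarchical solvers apply recursively to their off-diagonal blocks, not the paper's solver itself:

```python
import numpy as np

def smw_solve(D, U, V, b):
    """Solve (D + U V^T) x = b via Sherman-Morrison-Woodbury:
    (D + U V^T)^{-1} = D^{-1} - D^{-1} U (I + V^T D^{-1} U)^{-1} V^T D^{-1}.
    Only solves against D and a small k x k 'capacitance' matrix are needed."""
    Dinv_b = np.linalg.solve(D, b)
    Dinv_U = np.linalg.solve(D, U)
    k = U.shape[1]
    capacitance = np.eye(k) + V.T @ Dinv_U        # small k x k system
    return Dinv_b - Dinv_U @ np.linalg.solve(capacitance, V.T @ Dinv_b)

rng = np.random.default_rng(0)
n, k = 200, 5
D = np.diag(rng.random(n) + 1.0)                  # well-conditioned diagonal part
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
b = rng.standard_normal(n)
x = smw_solve(D, U, V, b)
residual = np.linalg.norm((D + U @ V.T) @ x - b)
print(residual)  # tiny: matches a direct dense solve
```

The payoff is asymptotic: the dense system costs O(n^3) to factor, while the structure above needs only the D-solves plus an O(k^3) factorization of the small matrix.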
Dijkgraaf, Robbert; Verlinde, Erik; Verlinde, Herman
1997-02-01
Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N = 8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states.
Felder, G; Felder, Giovanni; Riser, Roman
2004-01-01
We study a class of holomorphic matrix models. The integrals are taken over middle-dimensional cycles in the space of complex square matrices. As the size of the matrices tends to infinity, the distribution of eigenvalues is given by a measure with support on a collection of arcs in the complex plane. We show that the arcs are level sets of the imaginary part of a hyperelliptic integral connecting branch points.
Matrix groups for undergraduates
Tapp, Kristopher
2016-01-01
Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...
Directory of Open Access Journals (Sweden)
Pradeep K. Rohatgi
1993-10-01
This paper reviews the worldwide upsurge in metal matrix composite research and development activities, with particular emphasis on cast metal-matrix particulate composites. Extensive applications of cast aluminium alloy MMCs in day-to-day use in the transportation and durable goods industries are expected to advance rapidly in the next decade. The potential for extensive application of cast composites is very large in India, especially in the areas of transportation, energy and electromechanical machinery; the extensive use of composites can lead to large savings in materials and energy, and in several instances, reduce environmental pollution. It is important that engineering education and short-term courses be organized to bring MMCs to the attention of students and engineering industry leaders. India already has excellent infrastructure for the development of composites, and has a long track record of world-class research in cast metal matrix particulate composites. It is now necessary to catalyze prototype and regular production of selected composite components, and get them used in different sectors, especially railways, cars, trucks, buses, scooters and other electromechanical machinery. This will require suitable policies, backed up by funding, to bring together the first-rate talent in cast composites that already exists in India to form viable development groups, followed by the setting up of production plants involving the process engineering capability already available within the country. In the longer term, cast composites should be developed for use in energy generation equipment, electronic packaging, aerospace systems, and smart structures.
Stage-structured matrix models for organisms with non-geometric development times
Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin
2009-01-01
Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...
Matrix Theory of Small Oscillations
Chavda, L. K.
1978-01-01
A complete matrix formulation of the theory of small oscillations is presented. Simple analytic solutions involving matrix functions are found which clearly exhibit the transients, the damping factors, the Breit-Wigner form for resonances, etc. (BB)
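The matrix formulation referred to here reduces small-oscillation problems to the generalized eigenproblem K a = ω² M a for mass matrix M and stiffness matrix K. A minimal sketch (a toy two-mass, three-spring chain with unit constants, not the paper's own examples):

```python
import numpy as np
from scipy.linalg import eigh

# Two equal unit masses coupled by three unit springs (wall-m-m-wall).
M = np.eye(2)                      # mass matrix
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])        # stiffness matrix

# Generalized symmetric eigenproblem K a = w^2 M a
w2, modes = eigh(K, M)             # squared frequencies (ascending) and mode shapes
freqs = np.sqrt(w2)

# Analytic normal-mode frequencies for this chain: 1 and sqrt(3)
assert np.allclose(freqs, [1.0, np.sqrt(3.0)])
```

The columns of `modes` are the normal modes: the in-phase motion at ω = 1 and the out-of-phase motion at ω = √3.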
What do we know about neutrinoless double-beta decay nuclear matrix elements?
Menéndez, J
2016-01-01
The detection of neutrinoless double-beta decay will establish the Majorana nature of neutrinos. In addition, if the nuclear matrix elements of this process are reliably known, the experimental lifetime will provide precious information about the absolute neutrino masses and hierarchy. I review the status of nuclear structure calculations for neutrinoless double-beta decay matrix elements, and discuss some key issues to be addressed in order to meet the demand for accurate nuclear matrix elements.
Matrix Completions and Chordal Graphs
Institute of Scientific and Technical Information of China (English)
Kenneth John HARRISON
2003-01-01
In a matrix-completion problem the aim is to specify the missing entries of a matrix in order to produce a matrix with particular properties. In this paper we survey results concerning matrix-completion problems where we look for completions of various types for partial matrices supported on a given pattern. We see that the existence of completions of the required type often depends on the chordal properties of graphs associated with the pattern.
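One classical instance of the chordality connection can be checked numerically. For a partial symmetric matrix whose specified entries form a chordal pattern (here a tridiagonal one), a positive definite completion exists whenever every fully specified principal submatrix is positive definite, and the maximum-determinant completion has an inverse that vanishes on the unspecified positions. The values below are chosen for illustration, not taken from the paper:

```python
import numpy as np

# Tridiagonal specification pattern with the (1,3) entry missing.
# For this chordal pattern the max-determinant completion is x = m12*m23/m22.
m12, m22, m23 = 0.5, 1.0, 0.5
x = m12 * m23 / m22
M = np.array([[1.0, m12, x],
              [m12, m22, m23],
              [x,   m23, 1.0]])

assert np.all(np.linalg.eigvalsh(M) > 0)     # the completion is positive definite
assert abs(np.linalg.inv(M)[0, 2]) < 1e-12   # inverse is zero at the unspecified slot
```

The vanishing inverse entry is the hallmark of the maximum-determinant completion and is what fails, in general, when the specification pattern is not chordal.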
THE GENERALIZED POLARIZATION SCATTERING MATRIX
The Least Square Best Estimate of the Generalized Polarization Matrix from a set of measurements is then developed. It is shown that the Faraday...matrix data. It is then shown that the Least Square Best Estimate of the orientation angle of a symmetric target is also determinable from Faraday-rotation-contaminated short-pulse monostatic polarization matrix data.
Accurate measurement of unsteady state fluid temperature
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheath thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (the housing) using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertial thermometer model. A comparison of the results demonstrated that the new thermometer obtains the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperatures are possible thanks to the thermometer's low inertia and the fast space-marching method applied to solve the inverse heat conduction problem.
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate diagnosis is essential for amebiasis
Institute of Scientific and Technical Information of China (English)
Anonymous
2004-01-01
Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is among the most widely distributed parasites in the world. In particular, Entamoeba histolytica infection in developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge of the disease.
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Niche Genetic Algorithm with Accurate Optimization Performance
Institute of Scientific and Technical Information of China (English)
LIU Jian-hua; YAN De-kun
2005-01-01
Based on a crowding mechanism, a novel niche genetic algorithm is proposed that records the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm can not only maintain population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worthwhile in cases that demand high solution precision.
Universality: Accurate Checks in Dyson's Hierarchical Model
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
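The linear-fitting step mentioned above amounts to a least-squares line in log-log coordinates: near the critical point the susceptibility behaves as χ ~ C(βc − β)^(−γ), so log χ is linear in log(βc − β) with slope −γ. A sketch with synthetic (noise-free) data, purely to illustrate the procedure:

```python
import numpy as np

# Synthetic susceptibility data obeying chi = C * (beta_c - beta)^(-gamma).
beta_c, gamma_true, C = 1.0, 1.2991, 0.7
beta = np.linspace(0.90, 0.99, 20)
chi = C * (beta_c - beta) ** (-gamma_true)

# Least-squares line in log-log coordinates; the slope gives -gamma.
t = np.log(beta_c - beta)
slope, intercept = np.polyfit(t, np.log(chi), 1)
gamma_est = -slope

assert abs(gamma_est - gamma_true) < 1e-6
```

With real data the residuals of this fit, and the drift of the slope as the fit window approaches βc, are what control the quoted uncertainty on γ.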
The cellulose resource matrix.
Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G
2013-03-01
The emerging biobased economy is causing shifts from mineral fossil oil based resources towards renewable resources. Because of market mechanisms, current and new industries utilising renewable commodities, will attempt to secure their supply of resources. Cellulose is among these commodities, where large scale competition can be expected and already is observed for the traditional industries such as the paper industry. Cellulose and lignocellulosic raw materials (like wood and non-wood fibre crops) are being utilised in many industrial sectors. Due to the initiated transition towards biobased economy, these raw materials are intensively investigated also for new applications such as 2nd generation biofuels and 'green' chemicals and materials production (Clark, 2007; Lange, 2007; Petrus & Noordermeer, 2006; Ragauskas et al., 2006; Regalbuto, 2009). As lignocellulosic raw materials are available in variable quantities and qualities, unnecessary competition can be avoided via the choice of suitable raw materials for a target application. For example, utilisation of cellulose as carbohydrate source for ethanol production (Kabir Kazi et al., 2010) avoids the discussed competition with more easily digestible carbohydrates (sugars, starch) derived from the food supply chain. Also for cellulose use as a biopolymer several different competing markets can be distinguished. It is clear that these applications and markets will be influenced by large volume shifts. The world will have to reckon with increased competition and feedstock shortage (land use/biodiversity) (van Dam, de Klerk-Engels, Struik, & Rabbinge, 2005). It is of interest - in the context of sustainable development of the bioeconomy - to categorize the already available and emerging lignocellulosic resources in a matrix structure. When composing such "cellulose resource matrix" attention should be given to the quality aspects as well as to the available quantities and practical possibilities of processing the
Accurate Stellar Parameters for Exoplanet Host Stars
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
Accurate pose estimation for forensic identification
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate pattern registration for integrated circuit tomography
Energy Technology Data Exchange (ETDEWEB)
Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.
2001-07-15
As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41±0.17 μm and a measurement of 1.58±0.08 μm from a scanning electron microscope image of a cross section.
Accurate basis set truncation for wavefunction embedding
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
How Accurately can we Calculate Thermal Systems?
Energy Technology Data Exchange (ETDEWEB)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.
Accurate taxonomic assignment of short pyrosequencing reads.
Clemente, José C; Jansson, Jesper; Valiente, Gabriel
2010-01-01
Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
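The conservative lowest-common-ancestor baseline described above can be sketched on a toy taxonomy. Everything here (taxon names, parent map) is hypothetical, purely to show the LCA mapping the paper improves upon:

```python
# Toy taxonomy as a child -> parent map.
parent = {
    "E.coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
    "S.enterica": "Salmonella", "Salmonella": "Enterobacteriaceae",
    "Enterobacteriaceae": "root",
}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(nodes):
    """Deepest taxonomy node that is an ancestor of every input node."""
    common = set(ancestors(nodes[0]))
    for n in nodes[1:]:
        common &= set(ancestors(n))
    return max(common, key=lambda n: len(ancestors(n)))

# A read matching both species is conservatively assigned at the family level.
assert lca(["E.coli", "S.enterica"]) == "Enterobacteriaceae"
```

The drawback the abstract points out is visible even here: the family node covers every genus beneath it, including sequences the read never matched, which is what motivates choosing the node with the best precision/recall trade-off instead.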
Accurate determination of characteristic relative permeability curves
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
Matrix string partition function
Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre
1998-01-01
We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.
Eisenman, Richard L
2005-01-01
This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics.Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structur
Deift, Percy
2009-01-01
This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles-orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derived
Supported Molecular Matrix Electrophoresis.
Matsuno, Yu-Ki; Kameyama, Akihiko
2015-01-01
Mucins are difficult to separate using conventional gel electrophoresis methods such as SDS-PAGE and agarose gel electrophoresis, owing to their large size and heterogeneity. On the other hand, cellulose acetate membrane electrophoresis can separate these molecules, but is not compatible with glycan analysis. Here, we describe a novel membrane electrophoresis technique, termed "supported molecular matrix electrophoresis" (SMME), in which a porous polyvinylidene difluoride (PVDF) membrane filter is used to achieve separation. This description includes the separation, visualization, and glycan analysis of mucins with the SMME technique.
Matrix algebra for linear models
Gruber, Marvin H J
2013-01-01
Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f
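The matrix view of linear models that such texts build on starts from the normal equations: for y = Xb + e, the least-squares estimate is b̂ = (XᵀX)⁻¹Xᵀy. A minimal illustration (synthetic, noise-free data so the estimate recovers the coefficients exactly):

```python
import numpy as np

# Design matrix with an intercept column and two random regressors.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 2))])
b_true = np.array([1.0, 2.0, -0.5])
y = X @ b_true                                # noise-free response for a clean check

# Normal equations: solve (X^T X) b = X^T y rather than forming the inverse.
b_hat = np.linalg.solve(X.T @ X, X.T @ y)

assert np.allclose(b_hat, b_true)
```

In practice one prefers a QR or SVD-based solver (e.g. `np.linalg.lstsq`) over the explicit normal equations, which square the condition number of X; the normal-equations form is shown because it is the one matrix algebra texts derive first.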
Matrix product states for lattice field theories
Energy Technology Data Exchange (ETDEWEB)
Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, H. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Tsukuba Univ., Ibaraki (Japan). Graduate School of Pure and Applied Sciences
2013-10-15
The term Tensor Network States (TNS) refers to a number of families of states that represent different ansätze for the efficient description of the state of a quantum many-body system. Matrix Product States (MPS) are one particular case of TNS, and have become the most precise tool for the numerical study of one dimensional quantum many-body systems, as the basis of the Density Matrix Renormalization Group method. Lattice Gauge Theories (LGT), in their Hamiltonian version, offer a challenging scenario for these techniques. While the dimensions and sizes of the systems amenable to TNS studies are still far from those achievable by 4-dimensional LGT tools, Tensor Networks can be readily used for problems which more standard techniques, such as Markov chain Monte Carlo simulations, cannot easily tackle. Examples of such problems are the presence of a chemical potential or out-of-equilibrium dynamics. We have explored the performance of Matrix Product States in the case of the Schwinger model, as a widely used testbench for lattice techniques. Using finite-size, open boundary MPS, we are able to determine the low energy states of the model in a fully non-perturbative manner. The precision achieved by the method allows for accurate finite size and continuum limit extrapolations of the ground state energy, but also of the chiral condensate and the mass gaps, thus showing the feasibility of these techniques for gauge theory problems.
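The core MPS construction can be sketched directly: any state of L qubits can be factored into a chain of site tensors by sweeping SVDs from left to right. This is a bare-bones illustration (open boundaries, exact SVDs with no bond truncation, random state; not the paper's Schwinger-model setup):

```python
import numpy as np

L = 4
rng = np.random.default_rng(2)
psi = rng.standard_normal(2 ** L)
psi /= np.linalg.norm(psi)

# Left-to-right SVD sweep: peel off one physical index per step.
tensors, rest = [], psi.reshape(1, -1)
for _ in range(L - 1):
    chi = rest.shape[0]                                   # current left bond dimension
    U, s, Vt = np.linalg.svd(rest.reshape(chi * 2, -1), full_matrices=False)
    tensors.append(U.reshape(chi, 2, -1))                 # site tensor: (left, physical, right)
    rest = np.diag(s) @ Vt                                # remainder carried to the next site
tensors.append(rest.reshape(-1, 2, 1))                    # last site closes the chain

# Contracting the chain recovers the original state exactly (no truncation was done).
out = tensors[0]
for A in tensors[1:]:
    out = np.tensordot(out, A, axes=([out.ndim - 1], [0]))
assert np.allclose(out.reshape(-1), psi)
```

The power of MPS comes from truncating the singular values `s` at each step: for states with limited entanglement (such as the low-energy states studied here), small bond dimensions already give near-exact results, which is what makes DMRG-style methods precise in one dimension.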
Automated acoustic matrix deposition for MALDI sample preparation.
Aerni, Hans-Rudolf; Cornett, Dale S; Caprioli, Richard M
2006-02-01
Novel high-throughput sample preparation strategies for MALDI imaging mass spectrometry (IMS) and profiling are presented. An acoustic reagent multispotter was developed to provide improved reproducibility for depositing matrix onto a sample surface, for example, such as a tissue section. The unique design of the acoustic droplet ejector and its optimization for depositing matrix solution are discussed. Since it does not contain a capillary or nozzle for fluid ejection, issues with clogging of these orifices are avoided. Automated matrix deposition provides better control of conditions affecting protein extraction and matrix crystallization with the ability to deposit matrix accurately onto small surface features. For tissue sections, matrix spots of 180-200 μm in diameter were obtained and a procedure is described for generating coordinate files readable by a mass spectrometer to permit automated profile acquisition. Mass spectral quality and reproducibility was found to be better than that obtained with manual pipet spotting. The instrument can also deposit matrix spots in a dense array pattern so that, after analysis in a mass spectrometer, two-dimensional ion images may be constructed. Example ion images from a mouse brain are presented.
Status and Future of Nuclear Matrix Elements for Neutrinoless Double-Beta Decay: A Review
Engel, Jonathan
2016-01-01
The nuclear matrix elements that govern the rate of neutrinoless double beta decay must be accurately calculated if experiments are to reach their full potential. Theorists have been working on the problem for a long time but have recently stepped up their efforts as ton-scale experiments have begun to look feasible. Here we review past and recent work on the matrix elements in a wide variety of nuclear models and discuss work that will be done in the near future. Ab initio nuclear-structure theory, which is developing rapidly, holds out hope of more accurate matrix elements with quantifiable error bars.
Status and future of nuclear matrix elements for neutrinoless double-beta decay: a review
Engel, Jonathan; Menéndez, Javier
2017-04-01
The nuclear matrix elements that govern the rate of neutrinoless double beta decay must be accurately calculated if experiments are to reach their full potential. Theorists have been working on the problem for a long time but have recently stepped up their efforts as ton-scale experiments have begun to look feasible. Here we review past and recent work on the matrix elements in a wide variety of nuclear models and discuss work that will be done in the near future. Ab initio nuclear-structure theory, which is developing rapidly, holds out hope of more accurate matrix elements with quantifiable error bars.
Directory of Open Access Journals (Sweden)
Linda Christian Carrijo-Carvalho
2012-01-01
Lipocalin family members have been implicated in development, regeneration, and pathological processes, but their roles are unclear. Interestingly, these proteins are found abundant in the venom of the Lonomia obliqua caterpillar. Lipocalins are β-barrel proteins, which have three conserved motifs in their amino acid sequence. One of these motifs was shown to be a sequence signature involved in cell modulation. The aim of this study is to investigate the effects of a synthetic peptide comprising the lipocalin sequence motif in fibroblasts. This peptide suppressed caspase 3 activity and upregulated Bcl-2 and Ki-67, but did not interfere with GPCR calcium mobilization. Fibroblast responses also involved increased expression of proinflammatory mediators. Increase of extracellular matrix proteins, such as collagen, fibronectin, and tenascin, was observed. Increase in collagen content was also observed in vivo. Results indicate that modulation effects displayed by lipocalins through this sequence motif involve cell survival, extracellular matrix remodeling, and cytokine signaling. Such effects can be related to the lipocalin roles in disease, development, and tissue repair.
Meng, Deyu
2012-01-01
The low-rank matrix factorization as an L1-norm minimization problem has recently attracted much attention due to its intrinsic robustness to the presence of outliers and missing data. In this paper, we propose a new method, called the divide-and-conquer method, for solving this problem. The main idea is to break the original problem into a series of smallest possible subproblems, each involving only a single scalar parameter. Each of these subproblems is proved to be convex and to have a closed-form solution. By recursively optimizing these small problems in an analytical way, an efficient algorithm for solving the original problem, entirely avoiding time-consuming numerical optimization as an inner loop, can naturally be constructed. The computational complexity of the proposed algorithm is approximately linear in both data size and dimensionality, making it possible to handle large-scale L1-norm matrix factorization problems. The algorithm is also theoretically proved to be convergent. Based on a series of experi...
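Each scalar subproblem of the kind described above has the form min_u Σ_j |m_j − u·v_j|, whose closed-form minimizer is a weighted median of the ratios m_j/v_j with weights |v_j|. A minimal sketch of that building block (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

def weighted_median_minimizer(m, v):
    """Closed-form minimizer of f(u) = sum_j |m_j - u * v_j|.

    The objective is piecewise-linear and convex in u, with breakpoints
    at the ratios m_j / v_j; the minimizer is the weighted median of
    those ratios with weights |v_j|.
    """
    m = np.asarray(m, dtype=float)
    v = np.asarray(v, dtype=float)
    mask = v != 0                       # zero-weight terms do not affect u
    r = m[mask] / v[mask]
    w = np.abs(v[mask])
    order = np.argsort(r)
    r, w = r[order], w[order]
    cum = np.cumsum(w)
    k = np.searchsorted(cum, 0.5 * cum[-1])  # first index reaching half the total weight
    return r[k]

rng = np.random.default_rng(0)
m, v = rng.normal(size=50), rng.normal(size=50)
u_star = weighted_median_minimizer(m, v)
obj = lambda u: np.abs(m - u * v).sum()
# The optimum lies at a breakpoint, so comparing against all ratios verifies it:
assert obj(u_star) <= min(obj(u) for u in m / v) + 1e-9
```

Solving each coordinate this way is what lets the overall factorization avoid a numerical inner loop.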
Accurate Telescope Mount Positioning with MEMS Accelerometers
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented so as to form part of a telescope control system.
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800 to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Accurate Weather Forecasting for Radio Astronomy
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's millimeter-wave propagation model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
Matrix sketching for big data reduction (Conference Presentation)
Ezekiel, Soundararajan; Giansiracusa, Michael
2017-05-01
In recent years, the concept of Big Data has become a more prominent issue as the volume of data, as well as the velocity at which it is produced, exponentially increases. By 2020 the amount of data being stored is estimated to be 44 zettabytes, and currently over 31 terabytes of data is generated every second. Algorithms and applications must be able to scale effectively to the volume of data being generated. One such application designed to work effectively and efficiently with Big Data is IBM's Skylark, part of DARPA's XDATA program, an open-source catalog of tools for dealing with Big Data. Skylark, or Sketching-based Matrix Computations for Machine Learning, is a library of functions designed to reduce the complexity of large-scale matrix problems that also implements kernel-based machine learning tasks. Sketching reduces the dimensionality of matrices through randomization, compressing matrices while preserving key properties and speeding up computations. Matrix sketches can be used to find accurate solutions to computations in less time, or can summarize data by identifying important rows and columns. In this paper, we investigate the effectiveness of sketched matrix computations using IBM's Skylark versus non-sketched computations. We judge effectiveness on two factors: computational complexity and validity of outputs. Initial results from testing with smaller matrices are promising, showing that Skylark has a considerable reduction ratio while still accurately performing matrix computations.
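The sketch-and-solve idea the abstract describes can be illustrated generically (a plain NumPy sketch of Gaussian random projection for least squares, not Skylark's API): compress the rows of a tall problem with a random matrix, solve the small compressed problem, and check that the residual is near-optimal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 20, 200            # tall problem, sketch size m << n

A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Exact solution of min ||Ax - b||.
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: compress the rows with a Gaussian random projection,
# then solve the much smaller m x d problem.
S = rng.normal(size=(m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

res_exact = np.linalg.norm(A @ x_exact - b)
res_sketch = np.linalg.norm(A @ x_sketch - b)
assert res_sketch <= 1.5 * res_exact   # near-optimal residual at a fraction of the cost
```

The compressed solve touches an m×d system instead of n×d, which is where the speedup comes from when n is very large.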
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps to improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
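The threshold rule can be sketched as a toy choice function (an illustration only; `br` plays the role of the paper's BR threshold and the two-route setup is assumed):

```python
import numpy as np

def choose_route(travel_times, br, rng):
    """Boundedly rational route choice: if routes differ from the best one
    by less than the threshold `br`, treat them as equivalent and pick
    among them uniformly; otherwise take the fastest route."""
    t = np.asarray(travel_times, dtype=float)
    best = t.min()
    candidates = np.flatnonzero(t - best < br) if br > 0 else np.flatnonzero(t == best)
    return rng.choice(candidates)

rng = np.random.default_rng(2)
# With br = 0 the traveler always takes the strictly fastest route...
assert choose_route([10.0, 12.0], 0.0, rng) == 0
# ...while a threshold larger than the gap makes both routes acceptable.
picks = {int(choose_route([10.0, 12.0], 5.0, rng)) for _ in range(100)}
assert picks == {0, 1}
```

Spreading choices across near-equivalent routes is what damps the oscillations that exact (but stale) information induces.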
Matrix Quantization of Turbulence
Floratos, Emmanuel
2011-01-01
Based on our recent work on Quantum Nambu Mechanics, we provide an explicit quantization of the Lorenz chaotic attractor through the introduction of noncommutative phase space coordinates as Hermitian $N \times N$ matrices in $R^{3}$. For the volume-preserving part, they satisfy the commutation relations induced by one of the two Nambu Hamiltonians, the second one generating a unique time evolution. Dissipation is incorporated quantum mechanically in a self-consistent way, having the correct classical limit without the introduction of external degrees of freedom. Due to its phase-space volume contraction, it violates the quantum commutation relations. We demonstrate that the Heisenberg-Nambu evolution equations for the Matrix Lorenz system develop fast decoherence to N independent Lorenz attractors. On the other hand, there is a weak dissipation regime where the quantum mechanical properties of the volume-preserving, non-dissipative sector survive for long times.
Velasco, Pedro Pablo Perez
2008-01-01
The objective of this book is to develop an algebraization of graph grammars. Equivalently, we study graph dynamics. From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars, for which a purely algebraic approach has not existed until now. A Chomsky (or string) grammar is, roughly speaking, a precise description of a formal language (which in essence is a set of strings). In a more discrete-mathematical style, it can be said that graph grammars -- Matrix Graph Grammars in particular -- study the dynamics of graphs. Ideally, this algebraization would reinforce our understanding of grammars in general, providing new analysis techniques and generalizations of concepts, problems and results known so far.
Dimiev, Stancho; Stoev, Peter; Stoilova, Stanislava
2013-12-01
The notion of an anticirculant is of ordinary interest to specialists in general algebra (see for instance [1]). In this paper we develop some aspects of anticirculants in real function theory. Denoting X ≔ x0 + j x1 + ⋯ + j^m xm, with xk ∈ R and m+1 = 2n, where j^k is the k-th power of the (m+1)×(m+1) matrix j with ones on the superdiagonal and −1 in the lower-left corner, j = (0 1 0 0 … 0; 0 0 1 0 … 0; 0 0 0 1 … 0; …; −1 0 0 0 … 0), we study the functional anticirculants f(X) ≔ f0(x0, x1, …, xm) + j f1(x0, x1, …, xm) + ⋯ + j^(m−1) f(m−1)(x0, x1, …, xm) + j^m fm(x0, x1, …, xm), where the fk(x0, x1, …, xm) are smooth functions of 2n real variables. A continuation for complex function theory will appear.
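The structure of the generator j can be checked numerically; the following is my reconstruction from the abstract (ones on the superdiagonal, −1 in the lower-left corner, i.e. the negacyclic shift), not code from the paper:

```python
import numpy as np
from numpy.linalg import matrix_power

def j_matrix(k):
    """The k x k generator j: ones on the superdiagonal, -1 in the
    lower-left corner (the negacyclic shift matrix)."""
    j = np.zeros((k, k))
    j[np.arange(k - 1), np.arange(1, k)] = 1.0
    j[k - 1, 0] = -1.0
    return j

k = 4                       # m + 1 = 2n with n = 2
j = j_matrix(k)
# j generates the anticirculants: j^k = -I (so j^(2k) = I).
assert np.allclose(matrix_power(j, k), -np.eye(k))
# An anticirculant X = x0*I + x1*j + ... + x_{k-1}*j^{k-1}:
x = [1.0, 2.0, 3.0, 4.0]
X = sum(c * matrix_power(j, p) for p, c in enumerate(x))
# Its first row carries the coefficients; later rows are negacyclic shifts.
assert np.allclose(X[0], x)
```

The relation j^(m+1) = −I is what makes X behave like a hypercomplex number built from 2n real components.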
Energy Technology Data Exchange (ETDEWEB)
Hastings, Matthew B [Los Alamos National Laboratory
2009-01-01
We show how to combine the light-cone and matrix product algorithms to simulate quantum systems far from equilibrium for long times. For the case of the XXZ spin chain at Δ = 0.5, we simulate to a time of ≈ 22.5. While part of the long simulation time is due to the use of the light-cone method, we also describe a modification of the infinite time-evolving block decimation algorithm with improved numerical stability, and we describe how to incorporate symmetry into this algorithm. While statistical sampling error means that we are not yet able to make a definite statement, the behavior of the simulation at long times indicates the appearance of either 'revivals' in the order parameter as predicted by Hastings and Levitov (e-print arXiv:0806.4283) or of a distinct shoulder in the decay of the order parameter.
Matrix membranes and integrability
Energy Technology Data Exchange (ETDEWEB)
Zachos, C. [Argonne National Lab., IL (United States); Fairlie, D. [University of Durham (United Kingdom). Dept. of Mathematical Sciences; Curtright, T. [University of Miami, Coral Gables, FL (United States). Dept. of Physics
1997-06-01
This is a pedagogical digest of results reported in Curtright, Fairlie, & Zachos 1997, and an explicit implementation of Euler's construction for the solution of the Poisson Bracket dual Nahm equation. It does not cover 9- and 10-dimensional systems, nor subsequent progress on them (Fairlie 1997). Cubic interactions are considered in 3 and 7 space dimensions, respectively, for bosonic membranes in Poisson Bracket form. Their symmetries and vacuum configurations are explored. Their associated first-order equations are transformed to Nahm's equations, and are hence seen to be integrable, for the 3-dimensional case, by virtue of the explicit Lax pair provided. Most constructions introduced also apply to matrix commutator or Moyal Bracket analogs.
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely in biomonitoring involves the use of highly sensitive and selective instrumentation, such as tandem mass spectrometers, and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes, as well as their chromatographic response, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
Spherical membranes in Matrix theory
Kabat, D; Kabat, Daniel; Taylor, Washington
1998-01-01
We consider membranes of spherical topology in uncompactified Matrix theory. In general for large membranes Matrix theory reproduces the classical membrane dynamics up to 1/N corrections; for certain simple membrane configurations, the equations of motion agree exactly at finite N. We derive a general formula for the one-loop Matrix potential between two finite-sized objects at large separations. Applied to a graviton interacting with a round spherical membrane, we show that the Matrix potential agrees with the naive supergravity potential for large N, but differs at subleading orders in N. The result is quite general: we prove a pair of theorems showing that for large N, after removing the effects of gravitational radiation, the one-loop potential between classical Matrix configurations agrees with the long-distance potential expected from supergravity. As a spherical membrane shrinks, it eventually becomes a black hole. This provides a natural framework to study Schwarzschild black holes in Matrix theory.
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
To apply Bayes’ Theorem, one must have a model y(x) that maps the state variables x (the solution in this case) to the measurements y. In this case, the unknown state variables are the configuration and composition of the held-up SNM. The measurements are the detector readings. Thus, the natural model is neutral-particle radiation transport, where a wealth of computational tools exists for performing these simulations accurately and efficiently. The combination of predictive model and Bayesian inference forms the Data Integration with Modeled Predictions (DIMP) method that serves as the foundation for this project. The cost functional describing the model-to-data misfit is computed via a norm created by the inverse of the covariance matrix of the model parameters and responses. Since the model y(x) for the holdup problem is nonlinear, a nonlinear optimization on Q is conducted via Newton-type iterative methods to find the optimal values of the model parameters x. This project comprised a collaboration between NC State University (NCSU), the University of South Carolina (USC), and Oak Ridge National Laboratory (ORNL). The project was originally proposed in seven main tasks, with an eighth contingency task to be performed if time and funding permitted; in fact time did not permit commencement of the contingency task and it was not performed. The remaining tasks involved holdup analysis with gamma detection strategies and, separately, with neutrons based on coincidence counting. Early in the project, upon consultation with experts in coincidence counting, it became evident that this approach is not viable for holdup applications, and this task was replaced with an alternative but valuable investigation carried out by the USC partner. Nevertheless, the experimental measurements at ORNL of both gamma and neutron sources for the purpose of constructing Detector Response Functions (DRFs) with the associated uncertainties were indeed completed.
Linearized supergravity from Matrix theory
Kabat, D; Kabat, Daniel; Taylor, Washington
1998-01-01
We show that the linearized supergravity potential between two objects arising from the exchange of quanta with zero longitudinal momentum is reproduced to all orders in 1/r by terms in the one-loop Matrix theory potential. The essential ingredient in the proof is the identification of the Matrix theory quantities corresponding to moments of the stress tensor and membrane current. We also point out that finite-N Matrix theory violates the Equivalence Principle.
Lectures on Matrix Field Theory
Ydri, Badis
The subject of matrix field theory involves matrix models, noncommutative geometry, fuzzy physics and noncommutative field theory and their interplay. In these lectures, a lot of emphasis is placed on the matrix formulation of noncommutative and fuzzy spaces, and on the non-perturbative treatment of the corresponding field theories. In particular, the phase structure of noncommutative $\phi^4$ theory is treated in great detail, and an introduction to noncommutative gauge theory is given.
Matrix elements of unstable states
Bernard, V; Meißner, U -G; Rusetsky, A
2012-01-01
Using the language of non-relativistic effective Lagrangians, we formulate a systematic framework for the calculation of resonance matrix elements in lattice QCD. The generalization of the Lüscher-Lellouch formula for these matrix elements is derived. We further discuss in detail the procedure of the analytic continuation of the resonance matrix elements into the complex energy plane and investigate the infinite-volume limit.
MacKaay, M A
1996-01-01
In order to construct a representation of the tangle category one needs an enhanced R-matrix. In this paper we define a sufficient and necessary condition for enhancement that can be checked easily for any R-matrix. If the R-matrix can be enhanced, we also show how to construct the additional data that define the enhancement. As a direct consequence we find a sufficient condition for the construction of a knot invariant.
Accurate lineshape spectroscopy and the Boltzmann constant.
Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N
2015-10-14
Spectroscopy has an illustrious history of delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, with applications ranging from trace materials detection, to understanding the atmospheres of stars and planets, to constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
MEMS accelerometers in accurate mount positioning systems
Mészáros, László; Pál, András.; Jaskó, Attila
2014-07-01
In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems, or via real-time astrometric solutions based on the acquired images. MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
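One common way to exploit the spherical constraint mentioned above is that a static triaxial accelerometer always reads |a| = g, so static samples taken at many mount attitudes must lie on a sphere whose center is the per-axis bias. A least-squares sphere fit under that assumption (a generic sketch, not the authors' calibration procedure):

```python
import numpy as np

def fit_sphere(p):
    """Least-squares sphere fit ||p - c||^2 = r^2, linearized as
    2 c . p + (r^2 - |c|^2) = |p|^2 and solved with lstsq."""
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = x[:3]
    r = np.sqrt(x[3] + c @ c)
    return c, r

rng = np.random.default_rng(3)
# Synthetic static readings: the gravity direction varies with mount
# attitude, the magnitude is fixed at g, and each axis has a bias.
g, bias = 9.81, np.array([0.3, -0.2, 0.5])
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
readings = g * u + bias + 0.01 * rng.normal(size=(500, 3))

c, r = fit_sphere(readings)
assert np.allclose(c, bias, atol=0.01)   # recovered per-axis bias
assert abs(r - g) < 0.01                 # recovered gravity magnitude
```

Subtracting the fitted center from raw readings removes the bias, which is typically the dominant error in cheap MEMS sensors.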
Does a pneumotach accurately characterize voice function?
Walters, Gage; Krane, Michael
2016-11-01
A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. We acknowledge the support of NIH Grant 2R01DC005642-10A1.
Towards Accurate Modeling of Moving Contact Lines
Holmgren, Hanna
2015-01-01
A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...
Accurate upper body rehabilitation system using kinect.
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range-of-motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in range-of-motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.
Noninvasive hemoglobin monitoring: how accurate is enough?
Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E
2013-10-01
Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
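For reference, the two path-based identities the abstract alludes to are the standard thermodynamic integration and free energy perturbation (Zwanzig) formulas; these are the textbook forms, not this paper's specific estimator:

```latex
% Thermodynamic integration along a coupling parameter \lambda:
\Delta F \;=\; \int_0^1 \Big\langle \frac{\partial H(\lambda)}{\partial \lambda} \Big\rangle_{\lambda}\, d\lambda
% Free energy perturbation between end states 0 and 1:
\Delta F \;=\; -k_B T \,\ln \Big\langle e^{-(U_1 - U_0)/k_B T} \Big\rangle_{0}
```

Both estimators converge only when the sampled path connects the end states through well-overlapping intermediate ensembles, which is why the smoothness of the constructed path matters.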
Accurate fission data for nuclear safety
Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S
2013-01-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
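To make the filter definition above concrete, here is a minimal, direct O(S)-per-pixel implementation in Python/NumPy. This is the brute-force baseline that the paper's O(1) algorithm accelerates, not the proposed fast method; the function name and parameter defaults are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct O(S)-per-pixel bilateral filter, S = (2*radius+1)^2.

    Gaussian spatial kernel times Gaussian range kernel; edges are
    handled by clipping the window at the image border.
    """
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    # Precompute the spatial kernel over the (2r+1) x (2r+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            sk = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            # Range kernel: penalizes intensity difference to the center pixel.
            rk = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = sk * rk
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# A constant image is a fixed point of the filter (all range weights equal 1).
flat = np.full((16, 16), 0.5)
assert np.allclose(bilateral_filter(flat), flat)

# A step edge is preserved: the range kernel suppresses contributions
# from pixels on the other side of the edge.
step = np.zeros((16, 16)); step[:, 8:] = 1.0
edge_out = bilateral_filter(step, sigma_r=0.05)
assert np.abs(edge_out - step).max() < 0.05
```

The two checks illustrate the edge-preserving behavior that a pure spatial Gaussian lacks, and the nested loops make the O(S)-per-pixel cost that the paper eliminates explicit.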
Accurate thermoplasmonic simulation of metallic nanoparticles
Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing
2017-01-01
Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially when many incident fields are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.
Matrix Models and Gravitational Corrections
Dijkgraaf, R; Temurhan, M; Dijkgraaf, Robbert; Sinkovics, Annamaria; Temurhan, Mine
2002-01-01
We provide evidence of the relation between supersymmetric gauge theories and matrix models beyond the planar limit. We compute gravitational R^2 couplings in gauge theories perturbatively, by summing genus one matrix model diagrams. These diagrams give the leading 1/N^2 corrections in the large N limit of the matrix model and can be related to twist field correlators in a collective conformal field theory. In the case of softly broken SU(N) N=2 super Yang-Mills theories, we find that these exact solutions of the matrix models agree with results obtained by topological field theory methods.
Energy Technology Data Exchange (ETDEWEB)
Dorey, Nick [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom); Tong, David [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom); Department of Theoretical Physics, TIFR,Homi Bhabha Road, Mumbai 400 005 (India); Stanford Institute for Theoretical Physics,Via Pueblo, Stanford, CA 94305 (United States); Turner, Carl [Department of Applied Mathematics and Theoretical Physics, University of Cambridge,Wilberforce Road, Cambridge, CB3 OWA (United Kingdom)
2016-08-01
We study a U(N) gauged matrix quantum mechanics which, in the large N limit, is closely related to the chiral WZW conformal field theory. This manifests itself in two ways. First, we construct the left-moving Kac-Moody algebra from matrix degrees of freedom. Secondly, we compute the partition function of the matrix model in terms of Schur and Kostka polynomials and show that, in the large N limit, it coincides with the partition function of the WZW model. This same matrix model was recently shown to describe non-Abelian quantum Hall states and the relationship to the WZW model can be understood in this framework.
Energy Technology Data Exchange (ETDEWEB)
Drmac, Z. [Univ. of Colorado, Boulder, CO (United States). Dept. of Computer Science
1997-07-01
In this paper the author considers how to compute the singular value decomposition (SVD) A = UΣV^T of A = [a_1, a_2] ∈ R^(m×2) accurately in floating-point arithmetic. It is shown how to compute the Jacobi rotation V (the right singular vector matrix) and how to compute AV = UΣ even if the floating-point representation of V is the identity matrix. In the case ‖a_1‖_2 ≫ ‖a_2‖_2, underflow can produce the identity matrix as the floating-point value of V, even for a_1, a_2 that are far from being mutually orthogonal. This can cause loss of accuracy and failure of convergence of the floating-point implementation of the Jacobi method for computing the SVD. The modified Jacobi method recommended in this paper can be implemented as a reliable and highly accurate procedure for computing the SVD of general real matrices whenever the exact singular values do not exceed the underflow or overflow limits.
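As background for the abstract above, the plain (non-underflow-safe) two-column Jacobi SVD step can be sketched as follows. This is the textbook computation that the paper hardens against underflow; the function name is hypothetical:

```python
import numpy as np

def svd_mx2(A):
    """SVD of an m-by-2 real matrix via a single Jacobi rotation.

    Finds the rotation V diagonalizing the 2x2 Gram matrix A^T A, then
    reads off U*Sigma = A @ V.  Textbook scheme only: it does NOT include
    the underflow safeguards that motivate the paper above, and it
    assumes A has full column rank.
    """
    a = A[:, 0] @ A[:, 0]
    b = A[:, 1] @ A[:, 1]
    c = A[:, 0] @ A[:, 1]
    # This angle zeroes the off-diagonal of V^T (A^T A) V.
    theta = 0.5 * np.arctan2(2 * c, a - b)
    ct, st = np.cos(theta), np.sin(theta)
    V = np.array([[ct, -st], [st, ct]])
    B = A @ V                          # columns are sigma_i * u_i
    sigma = np.linalg.norm(B, axis=0)
    # Sort singular values in decreasing order, as is conventional.
    order = np.argsort(sigma)[::-1]
    sigma, V, B = sigma[order], V[:, order], B[:, order]
    U = B / sigma
    return U, sigma, V

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))
U, s, V = svd_mx2(A)
assert np.allclose(U * s @ V.T, A)                      # A = U Sigma V^T
assert np.allclose(s, np.linalg.svd(A, compute_uv=False))
```

In the regime the abstract describes (‖a_1‖_2 ≫ ‖a_2‖_2), rounding can make cos θ evaluate to exactly 1 while sin θ underflows to 0, so the computed V rounds to the identity; the paper's point is that AV = UΣ can still be computed accurately in that case.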
Rapid cavity prototyping using mode matching and globalised scattering matrix
Shinton, I
2009-01-01
Cavity design using traditional mesh-based numerical means (such as the finite element or finite difference methods) requires large mesh calculations in order to obtain accurate values, and cavity optimisation is often not achieved. Here we present a mode matching scheme which utilises a globalised scattering matrix approach that allows cavities with curved surfaces (i.e. cavities with elliptical irises and/or equators) to be accurately simulated, allowing rapid cavity prototyping and optimisation to be achieved. Results on structures in the CLIC main...
Extended Matrix Variate Hypergeometric Functions and Matrix Variate Distributions
Directory of Open Access Journals (Sweden)
Daya K. Nagar
2015-01-01
Hypergeometric functions of matrix arguments occur frequently in multivariate statistical analysis. In this paper, we define and study extended forms of Gauss and confluent hypergeometric functions of matrix arguments and show that they occur naturally in statistical distribution theory.
Chan, Garnet Kin-Lic; Nakatani, Naoki; Li, Zhendong; White, Steven R
2016-01-01
Current descriptions of the ab initio DMRG algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized-operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational par...
Matrix expression of thermal radiative characteristics for an open complex
Institute of Scientific and Technical Information of China (English)
XU Xiru (徐希孺); FAN Wenjie (范闻捷); CHEN Liangfu (陈良富)
2002-01-01
The directionality of the thermal radiance of a homogeneous, isothermal, non-black plane surface is entirely determined by its directional emissivity, which depends on the complex dielectric constant and the roughness of the surface. This paper proves that it is necessary to express emissivity by a matrix when the target becomes an inhomogeneous, non-isothermal open complex with complicated inner geometric structure. The matrix describes the inner radiative interaction among components accurately and also expresses the target's thermal radiative directionality and structural characteristics completely. The advantages of the matrix expression are as follows: first, the physical mechanism behind the effective emissivity of an open complex is described in a simple and complete way; second, it becomes easy to understand the principle and method of retrieving component temperatures from multi-angle thermal remotely sensed data; and third, the differences in directionality between an open complex and a homogeneous, isothermal, non-black plane body are expressed by using an effective emissivity matrix instead of an emissivity vector. The formula of classical physics is only a special case of the matrix expression; the matrix is therefore a universal, unconditional expression describing the directionality of thermal radiance.
Stage scoring of liver fibrosis using Mueller matrix microscope
Zhou, Jialing; He, Honghui; Wang, Ye; Ma, Hui
2016-10-01
Liver fibrosis is a common pathological process of various chronic liver diseases, including alcoholic hepatitis, viral hepatitis, and others. Accurate evaluation of liver fibrosis is necessary for effective therapy, and a five-stage grading system has been developed. Currently, experienced pathologists use stained liver biopsies to assess the degree of liver fibrosis, but it is difficult to obtain highly reproducible results because of large discrepancies among observers. Polarization imaging techniques have the potential to score liver fibrosis, since they are capable of probing the structural and optical properties of samples. Considering that Mueller matrix measurements can provide comprehensive microstructural information about tissues, in this paper we apply the Mueller matrix microscope to human liver fibrosis slices in different fibrosis stages. We extract the valid regions and adopt the Mueller matrix polar decomposition (MMPD) and Mueller matrix transformation (MMT) parameters for quantitative analysis. We also use Monte Carlo simulation to analyze the relationship between the microscopic Mueller matrix parameters and the characteristic structural changes during the fibrosis process. The experimental and Monte Carlo simulated results show good consistency, and we find a positive correlation between the parameters and the stage of liver fibrosis. The results presented in this paper indicate that the Mueller matrix microscope can provide additional information for the detection and fibrosis scoring of liver tissues and has great potential in liver fibrosis diagnosis.
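For context, the MMPD referred to above is the Lu-Chipman polar decomposition, which factors a measured Mueller matrix into three physically interpretable components:

```latex
% Lu-Chipman polar decomposition (the basis of MMPD):
M \;=\; M_{\Delta}\, M_{R}\, M_{D}
% M_D: diattenuator, M_R: retarder, M_{\Delta}: depolarizer
```

Parameters extracted from these factors (e.g. retardance and depolarization) are what such studies correlate with microstructural changes in tissue.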
Accurate paleointensities - the multi-method approach
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Optimizing cell arrays for accurate functional genomics
Directory of Open Access Journals (Sweden)
Fengler Sven
2012-07-01
Abstract. Background: Cellular responses emerge from a complex network of dynamic biochemical reactions. To investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results: We imaged checkered CA that express two distinct fluorescent proteins and segmented the images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties. Conclusions: Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here, we focus on an equally important parameter: interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa cells but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.
Important Nearby Galaxies without Accurate Distances
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.
How flatbed scanners upset accurate film dosimetry.
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length, and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green, and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.
Ceramic matrix composite article and process of fabricating a ceramic matrix composite article
Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert
2016-01-12
A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.
Michelson, J
2004-01-01
The Matrix Theory that has been proposed for various pp wave backgrounds is discussed. Particular emphasis is on the existence of novel nontrivial supersymmetric solutions of the Matrix Theory. These correspond to branes of various shapes (ellipsoidal, paraboloidal, and possibly hyperboloidal) that are unexpected from previous studies of branes in pp wave geometries.
Jairam, Dharmananda; Kiewra, Kenneth A.; Kauffman, Douglas F.; Zhao, Ruomeng
2012-01-01
This study investigated how best to study a matrix. Fifty-three participants studied a matrix topically (1 column at a time), categorically (1 row at a time), or in a unified way (all at once). Results revealed that categorical and unified study produced higher: (a) performance on relationship and fact tests, (b) study material satisfaction, and…
Parallel Matrix Factorization for Binary Response
Khanna, Rajiv; Agarwal, Deepak; Chen, Beechung
2012-01-01
Predicting user affinity to items is an important problem in applications like content optimization, computational advertising, and many more. While bilinear random effect models (matrix factorization) provide state-of-the-art performance when minimizing RMSE through a Gaussian response model on explicit ratings data, applying it to imbalanced binary response data presents additional challenges that we carefully study in this paper. Data in many applications usually consist of users' implicit response that are often binary -- clicking an item or not; the goal is to predict click rates, which is often combined with other measures to calculate utilities to rank items at runtime of the recommender systems. Because of the implicit nature, such data are usually much larger than explicit rating data and often have an imbalanced distribution with a small fraction of click events, making accurate click rate prediction difficult. In this paper, we address two problems. First, we show previous techniques to estimate bi...
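A minimal sketch of bilinear factorization with a logistic (binary) response, fit by full-batch gradient descent on a tiny dense toy matrix. This illustrates the modeling idea only, not the paper's estimation procedure; real click logs are sparse and heavily imbalanced, which is exactly the regime the paper addresses:

```python
import numpy as np

def logistic_mf(Y, rank=2, steps=300, lr=0.1, reg=0.01, seed=0):
    """Minimal logistic matrix factorization for a binary matrix Y.

    Models P(Y_ij = 1) = sigmoid(u_i . v_j) and fits U, V by full-batch
    gradient descent on the L2-regularized log loss.  A sketch only.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(steps):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))  # predicted click probabilities
        G = P - Y                              # gradient of log loss w.r.t. logits
        # Tuple assignment: both updates use the pre-step U and V.
        U, V = U - lr * (G @ V + reg * U), V - lr * (G.T @ U + reg * V)
    return U, V

# Tiny toy "click" matrix with a clear rank-2 block structure.
Y = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
U, V = logistic_mf(Y)
P = 1.0 / (1.0 + np.exp(-(U @ V.T)))
assert ((P > 0.5) == (Y > 0.5)).all()   # recovers the block pattern
```

Swapping the Gaussian response (squared error on ratings) for the logistic link is the change that lets the factorization target click probabilities directly rather than RMSE.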
Formalization of Function Matrix Theory in HOL
Directory of Open Access Journals (Sweden)
Zhiping Shi
2014-01-01
Function matrices, in which the elements are functions rather than numbers, are widely used in the model analysis of dynamic systems such as control systems and robotics. In safety-critical applications, such dynamic systems must be analyzed formally and accurately to ensure their correctness and safety. Higher-order logic (HOL) theorem proving is a promising technique for meeting this requirement. This paper proposes a higher-order logic formalization of function vector and function matrix theories using the HOL theorem prover, including data types, operations, and their properties, and further presents a formalization of the differential and integral of function vectors and function matrices. The formalization is implemented as a library in the HOL system. A case study, a formal analysis of the differential of quadratic functions, is presented to show the usefulness of the proposed formalization.
Machining of Metal Matrix Composites
2012-01-01
Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...
Matrix Model Approach to Cosmology
Chaney, A; Stern, A
2015-01-01
We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...
Matrix convolution operators on groups
Chu, Cho-Ho
2008-01-01
In the last decade, convolution operators of matrix functions have received unusual attention due to their diverse applications. This monograph presents some new developments in the spectral theory of these operators. The setting is the Lp spaces of matrix-valued functions on locally compact groups. The focus is on the spectra and eigenspaces of convolution operators on these spaces, defined by matrix-valued measures. Among various spectral results, the L2-spectrum of such an operator is completely determined and as an application, the spectrum of a discrete Laplacian on a homogeneous graph is computed using this result. The contractivity properties of matrix convolution semigroups are studied and applications to harmonic functions on Lie groups and Riemannian symmetric spaces are discussed. An interesting feature is the presence of Jordan algebraic structures in matrix-harmonic functions.
An automated method for accurate vessel segmentation
Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)
2017-05-01
Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in the two challenging while common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interests (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008
Quasinormal-mode expansion of the scattering matrix
Alpeggiani, Filippo; Verhagen, Ewold; Kuipers, L
2016-01-01
It is well known that the quasinormal modes (or resonant states) of photonic structures can be associated with the poles of the scattering matrix of the system in the complex-frequency plane. In this work, the inverse problem, i.e., the reconstruction of the scattering matrix from knowledge of the quasinormal modes, is addressed. We develop a general and scalable quasinormal-mode expansion of the scattering matrix, requiring only the complex eigenfrequencies and the far-field behaviour of the eigenmodes. The theory is validated by applying it to illustrative nanophotonic systems, showing that it provides an accurate first-principles prediction of the scattering properties, without the need to postulate ad hoc nonresonant channels.
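Such an expansion has the generic pole-residue structure familiar from resonance theory; schematically (generic symbols, not necessarily the paper's notation): S(\omega) \approx \sum_n R_n / (\omega - \tilde{\omega}_n), where the \tilde{\omega}_n are the complex quasinormal-mode eigenfrequencies and the residue matrices R_n are built from the far-field behaviour of the corresponding eigenmodes. The abstract's claim is that such a sum alone, without an added ad hoc background term, already reproduces the scattering response.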
Comprehensive T-Matrix Reference Database: A 2012-2013 Update
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2013-01-01
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles embedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.
Scientific articles recommendation with topic regression and relational matrix factorization
Institute of Scientific and Technical Information of China (English)
Ming YANG; Ying-ming LI; Zhongfei(Mark)ZHANG
2014-01-01
In this paper we study the problem of recommending scientific articles to users in an online community from a new perspective, considering topic regression modeling and the articles' relational structure simultaneously. First, we present a novel topic regression model, the topic regression matrix factorization (tr-MF), to solve the problem. The main idea of tr-MF lies in extending matrix factorization with probabilistic topic modeling. In particular, tr-MF introduces a regression model to regularize user factors through probabilistic topic modeling, under the basic hypothesis that users share similar preferences if they rate similar sets of items. Consequently, tr-MF provides interpretable latent factors for users and items, and makes accurate predictions for community users. To incorporate the relational structure into the framework of tr-MF, we introduce relational matrix factorization. By combining tr-MF with relational matrix factorization, we propose the topic regression collective matrix factorization (tr-CMF) model. In addition, we present the collaborative topic regression model with relational matrix factorization (CTR-RMF), which combines the existing collaborative topic regression (CTR) model and relational matrix factorization (RMF). From this point of view, CTR-RMF can be considered an appropriate baseline for tr-CMF. Further, we demonstrate the efficacy of the proposed models on a large subset of the data from CiteULike, a bibliography-sharing service. The proposed models outperform state-of-the-art matrix factorization models by a significant margin. Specifically, the proposed models are effective in making predictions for users with only a few ratings or even no ratings, and support tasks that are specific to a certain field, neither of which has been addressed in the existing literature.
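The core idea of regularizing item (or user) factors toward topic proportions can be sketched with a small gradient-descent loop. This is a hedged, CTR-style illustration under assumed hyperparameters, not the paper's exact tr-MF objective or fitting procedure.

```python
import numpy as np

def topic_regularized_mf(R, theta, k, n_iters=300, lr=0.01, lam=0.1):
    """Illustrative sketch (not the paper's tr-MF): factor the rating matrix
    R (users x items, 0 = unobserved) while pulling item factors V toward
    given topic proportions theta (items x k). Hyperparameters are assumed."""
    rng = np.random.default_rng(0)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = theta + rng.normal(scale=0.1, size=(n_items, k))
    mask = R > 0                                  # observed ratings only
    for _ in range(n_iters):
        E = mask * (R - U @ V.T)                  # residuals on observed entries
        U += lr * (E @ V - lam * U)               # plain L2 shrinkage on users
        V += lr * (E.T @ U - lam * (V - theta))   # shrink items toward topics
    return U, V

# toy example: 2 users x 3 items, with per-item topic proportions
R = np.array([[5., 3., 0.], [4., 0., 1.]])
theta = np.array([[1., 0.], [0., 1.], [0.5, 0.5]])
U, V = topic_regularized_mf(R, theta, k=2)
```

Because unobserved items fall back on the topic prior theta rather than an uninformed zero vector, this style of model can still score items for users with few or no ratings, which is the cold-start property the abstract highlights.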
A localized basis that allows fast and accurate second order Moller-Plesset calculations
Energy Technology Data Exchange (ETDEWEB)
Subotnik, Joseph E.; Head-Gordon, Martin
2004-10-27
We present a method for computing a basis of localized orthonormal orbitals (both occupied and virtual) in whose representation the Fock matrix is strongly diagonally dominant. The existence of these orbitals is shown empirically to be sufficient for achieving highly accurate MP2 energies, calculated according to Kapuy's method. This method (which we abbreviate KMP2), which involves a different partitioning of the n-electron Hamiltonian, scales at most quadratically in the number of electrons, with the potential for linear scaling. As such, we believe the KMP2 algorithm presented here could be the basis of a viable approach to local correlation calculations.
Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.
Tao, Jianmin; Mo, Yuxiang
2016-08-12
Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.